Nov 23 14:45:46 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 23 14:45:46 crc restorecon[4692]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 23 14:45:46 crc restorecon[4692]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 23 14:45:46 crc restorecon[4692]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 23 14:45:46 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 14:45:46 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 14:45:46 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 14:45:46 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 14:45:46 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 14:45:46 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 23 14:45:46 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 14:45:46 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 23 14:45:46 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 23 14:45:46 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 14:45:46 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 14:45:46 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 14:45:46 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 14:45:46 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 23 14:45:46 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 23 14:45:46 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 14:45:46 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 14:45:46 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 14:45:46 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 14:45:46 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:47 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:47 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 
14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: 
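The directory-hash entries pair each CA certificate with a short name such as f0c70a8d.0 or 76579174.0. That layout matches an OpenSSL hashed certificate directory (the form `openssl rehash` produces when a trust bundle is extracted): the name is the 32-bit subject-name hash in hex plus a collision-counter suffix. A sketch of reproducing such a name, assuming pyOpenSSL is available and using an illustrative PEM filename:

```python
# Sketch: derive the "<hash>.0" link name for a CA certificate; the hash
# is the same value `openssl x509 -subject_hash` prints. The pyOpenSSL
# dependency and the PEM path are assumptions for illustration.
from OpenSSL import crypto

def hashed_link_name(pem_path: str, suffix: int = 0) -> str:
    with open(pem_path, "rb") as f:
        cert = crypto.load_certificate(crypto.FILETYPE_PEM, f.read())
    return f"{cert.subject_name_hash():08x}.{suffix}"

print(hashed_link_name("USERTrust_ECC_Certification_Authority.pem"))
```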
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 14:45:48 crc 
restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 14:45:48 crc restorecon[4692]: 
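Each pod above carries its own MCS category pair (c10,c16 for the registry pod, c9,c17 for cluster-samples-operator, c0,c15 for the machine-api-operator volumes that follow). Runtimes allocate one pair per sandbox so that two pods cannot read each other's files even though both are labeled container_file_t. A toy allocator showing the shape of the scheme; the category count and pairing rule are the usual defaults, assumed here rather than read from this node:

```python
# Toy MCS pair allocator (sketch): two distinct categories from c0..c1023,
# order-normalized, yielding suffixes like "s0:c9,c17". Real runtimes also
# persist reservations so live pods never share a pair; that bookkeeping
# is omitted here.
import itertools
import random

CATEGORY_COUNT = 1024  # default MCS range in the targeted policy

def mcs_suffix(a: int, b: int) -> str:
    lo, hi = sorted((a, b))
    return f"s0:c{lo},c{hi}"

pairs = list(itertools.combinations(range(CATEGORY_COUNT), 2))
print(len(pairs))                         # 523776 distinct pod labels
print(mcs_suffix(*random.choice(pairs)))  # e.g. "s0:c9,c17"
```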
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 14:45:48 crc restorecon[4692]: 
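The configmap volume paths show the kubelet's atomic-writer layout: payload files live in a timestamped directory (..2025_02_24_06_20_07.341639300), ..data is a symlink to that directory, and the user-visible names (config.yaml, ca-bundle.crt) are symlinks that resolve through ..data. An update writes a fresh timestamped directory and repoints ..data with a single rename, so readers never observe a half-written volume. A minimal sketch of the pattern; the function and its arguments are illustrative, not kubelet code (the kubelet also appends a random suffix to the timestamp):

```python
# Sketch of the ..data swap visible in the volume paths above: write to a
# fresh timestamped dir, then repoint "..data" with an atomic rename.
import os
import time

def atomic_publish(volume: str, files: dict[str, bytes]) -> None:
    ts_dir = ".." + time.strftime("%Y_%m_%d_%H_%M_%S")
    os.makedirs(os.path.join(volume, ts_dir))
    for name, payload in files.items():
        with open(os.path.join(volume, ts_dir, name), "wb") as f:
            f.write(payload)
        link = os.path.join(volume, name)
        if not os.path.islink(link):
            # user-visible name resolves through the ..data indirection
            os.symlink(os.path.join("..data", name), link)
    tmp = os.path.join(volume, "..data_tmp")
    os.symlink(ts_dir, tmp)
    os.rename(tmp, os.path.join(volume, "..data"))  # atomic swap

# atomic_publish("/tmp/demo-volume", {"config.yaml": b"replicas: 2\n"})
```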
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 
14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
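This long catalog-content run is an operator catalog unpacked into an empty-dir volume: one directory per package (cilium, cloudnative-pg, crunchy-postgres-operator, and so on), each holding a catalog.json in the file-based catalog format. Enumerating the packages from such a tree is a one-screen job; the root path below is illustrative:

```python
# Sketch: list package names from a tree shaped like
# catalog-content/catalog/<package>/catalog.json. Root path is assumed.
import pathlib

root = pathlib.Path("catalog-content/catalog")

packages = sorted(
    p.name for p in root.iterdir() if (p / "catalog.json").is_file()
)
print(len(packages), packages[:3])
```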
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 14:45:48 crc restorecon[4692]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 14:45:48 crc restorecon[4692]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 14:45:48 crc restorecon[4692]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 23 14:45:49 crc kubenswrapper[4718]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 14:45:49 crc kubenswrapper[4718]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 23 14:45:49 crc kubenswrapper[4718]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 14:45:49 crc kubenswrapper[4718]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
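The restorecon pass above ends by relabeling the kubenswrapper binary itself (bin_t to kubelet_exec_t), after which the kubelet starts and emits its deprecated-flag warnings. To audit which pods kept admin-customized SELinux labels, and which target contexts were involved, the "not reset as customized by admin" entries can be tallied per pod UID. The following is a hypothetical helper sketched against the journal format shown above, not something referenced by the log; the kubelet.log input file name is an assumption, and the /var/lib/kubelet/pods/<uid>/ path layout is taken from the entries themselves.

import re
import sys
from collections import Counter

# Matches restorecon journal entries of the form seen above, e.g.
#   ... restorecon[4692]: /var/lib/kubelet/pods/<uid>/<path> not reset as
#   customized by admin to <selinux-context>
# \s+ between words tolerates entries that wrap across line breaks in a dump
# like this one; pod UIDs may be hyphenated or bare hex (static pods).
ENTRY = re.compile(
    r"restorecon\[\d+\]:\s+(/var/lib/kubelet/pods/([0-9a-f-]+)/\S+)"
    r"\s+not\s+reset\s+as\s+customized\s+by\s+admin\s+to\s+(\S+)"
)

def summarize(text):
    """Count 'not reset' paths per pod UID and per target SELinux context."""
    per_pod = Counter()
    per_context = Counter()
    for _path, uid, context in ENTRY.findall(text):
        per_pod[uid] += 1
        per_context[context] += 1
    return per_pod, per_context

if __name__ == "__main__":
    # File name is an assumption; pipe in any journalctl dump instead if preferred.
    text = open(sys.argv[1] if len(sys.argv) > 1 else "kubelet.log").read()
    per_pod, per_context = summarize(text)
    for uid, n in per_pod.most_common():
        print(f"{n:6d}  pod {uid}")
    for ctx, n in per_context.most_common():
        print(f"{n:6d}  {ctx}")

On this excerpt such a tally would show the two catalog pods (5225d0e4-... and 1d611f23-...) dominating, all with target context system_u:object_r:container_file_t:s0:c7,c13, while the static kube-scheduler pod (3dcd261975c3d6b9a6ad6367fd4facd3) spreads across several MCS category pairs.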
Nov 23 14:45:49 crc kubenswrapper[4718]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 23 14:45:49 crc kubenswrapper[4718]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.881355 4718 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887152 4718 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887186 4718 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887195 4718 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887203 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887213 4718 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887222 4718 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887230 4718 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887238 4718 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887246 4718 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887254 4718 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887262 4718 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887271 4718 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887279 4718 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887286 4718 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887294 4718 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887302 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887310 4718 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887318 4718 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887326 4718 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887334 4718 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887342 4718 
feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887350 4718 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887357 4718 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887365 4718 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887372 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887380 4718 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887388 4718 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887396 4718 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887405 4718 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887415 4718 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887423 4718 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887431 4718 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887469 4718 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887480 4718 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887491 4718 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887502 4718 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887510 4718 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887518 4718 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887527 4718 feature_gate.go:330] unrecognized feature gate: Example Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887535 4718 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887543 4718 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887551 4718 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887561 4718 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887572 4718 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887582 4718 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887591 4718 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887601 4718 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887609 4718 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887617 4718 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887626 4718 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887633 4718 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887642 4718 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887649 4718 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887657 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887665 4718 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887672 4718 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887680 4718 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887688 4718 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887695 4718 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887706 4718 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
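
The long runs of "unrecognized feature gate" warnings repeat because the kubelet parses its feature-gate configuration more than once during startup; the same gate names recur in each pass. A short sketch that deduplicates them and shows how many passes there were, again assuming the journal text sits in "kubelet.log" (hypothetical filename):

#!/usr/bin/env python3
# Deduplicate the repeated "unrecognized feature gate" warnings.
import re
from collections import Counter
from pathlib import Path

text = Path("kubelet.log").read_text(encoding="utf-8")  # hypothetical filename
gates = Counter(re.findall(r"unrecognized feature gate: (\w+)", text))
# Each gate appears once per parsing pass, so the count per gate equals
# the number of times the kubelet re-parsed its feature-gate config.
for gate, n in sorted(gates.items()):
    print(f"{gate}: seen {n}x")
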
Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887715 4718 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887725 4718 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887733 4718 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887743 4718 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887751 4718 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887758 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887766 4718 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887774 4718 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887782 4718 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887790 4718 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.887800 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888000 4718 flags.go:64] FLAG: --address="0.0.0.0" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888029 4718 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888047 4718 flags.go:64] FLAG: --anonymous-auth="true" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888061 4718 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888083 4718 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888095 4718 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888110 4718 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888126 4718 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888136 4718 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888146 4718 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888156 4718 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888166 4718 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888175 4718 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888184 4718 flags.go:64] FLAG: --cgroup-root="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888193 4718 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888202 4718 flags.go:64] FLAG: --client-ca-file="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888211 4718 flags.go:64] FLAG: --cloud-config="" Nov 23 14:45:49 
crc kubenswrapper[4718]: I1123 14:45:49.888220 4718 flags.go:64] FLAG: --cloud-provider="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888229 4718 flags.go:64] FLAG: --cluster-dns="[]" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888241 4718 flags.go:64] FLAG: --cluster-domain="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888250 4718 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888259 4718 flags.go:64] FLAG: --config-dir="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888268 4718 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888277 4718 flags.go:64] FLAG: --container-log-max-files="5" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888289 4718 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888299 4718 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888309 4718 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888318 4718 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888327 4718 flags.go:64] FLAG: --contention-profiling="false" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888335 4718 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888345 4718 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888355 4718 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888364 4718 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888386 4718 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888408 4718 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888425 4718 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.888476 4718 flags.go:64] FLAG: --enable-load-reader="false" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893344 4718 flags.go:64] FLAG: --enable-server="true" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893378 4718 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893394 4718 flags.go:64] FLAG: --event-burst="100" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893404 4718 flags.go:64] FLAG: --event-qps="50" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893413 4718 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893423 4718 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893462 4718 flags.go:64] FLAG: --eviction-hard="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893475 4718 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893484 4718 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893493 4718 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 
23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893503 4718 flags.go:64] FLAG: --eviction-soft="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893512 4718 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893521 4718 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893530 4718 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893540 4718 flags.go:64] FLAG: --experimental-mounter-path="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893549 4718 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893558 4718 flags.go:64] FLAG: --fail-swap-on="true" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893567 4718 flags.go:64] FLAG: --feature-gates="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893577 4718 flags.go:64] FLAG: --file-check-frequency="20s" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893586 4718 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893596 4718 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893605 4718 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893617 4718 flags.go:64] FLAG: --healthz-port="10248" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893628 4718 flags.go:64] FLAG: --help="false" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893640 4718 flags.go:64] FLAG: --hostname-override="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893652 4718 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893664 4718 flags.go:64] FLAG: --http-check-frequency="20s" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893676 4718 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893687 4718 flags.go:64] FLAG: --image-credential-provider-config="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893698 4718 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893709 4718 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893720 4718 flags.go:64] FLAG: --image-service-endpoint="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893732 4718 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893743 4718 flags.go:64] FLAG: --kube-api-burst="100" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893753 4718 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893765 4718 flags.go:64] FLAG: --kube-api-qps="50" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893776 4718 flags.go:64] FLAG: --kube-reserved="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893788 4718 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893800 4718 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893812 4718 flags.go:64] FLAG: --kubelet-cgroups="" Nov 23 14:45:49 crc kubenswrapper[4718]: 
I1123 14:45:49.893823 4718 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893834 4718 flags.go:64] FLAG: --lock-file="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893848 4718 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893860 4718 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893872 4718 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893888 4718 flags.go:64] FLAG: --log-json-split-stream="false" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893899 4718 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893909 4718 flags.go:64] FLAG: --log-text-split-stream="false" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893919 4718 flags.go:64] FLAG: --logging-format="text" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893928 4718 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893938 4718 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893947 4718 flags.go:64] FLAG: --manifest-url="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893957 4718 flags.go:64] FLAG: --manifest-url-header="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893969 4718 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893978 4718 flags.go:64] FLAG: --max-open-files="1000000" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893989 4718 flags.go:64] FLAG: --max-pods="110" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.893998 4718 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894006 4718 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894016 4718 flags.go:64] FLAG: --memory-manager-policy="None" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894025 4718 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894034 4718 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894044 4718 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894053 4718 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894072 4718 flags.go:64] FLAG: --node-status-max-images="50" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894082 4718 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894091 4718 flags.go:64] FLAG: --oom-score-adj="-999" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894100 4718 flags.go:64] FLAG: --pod-cidr="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894108 4718 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894121 4718 flags.go:64] FLAG: 
--pod-manifest-path="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894132 4718 flags.go:64] FLAG: --pod-max-pids="-1" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894141 4718 flags.go:64] FLAG: --pods-per-core="0" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894150 4718 flags.go:64] FLAG: --port="10250" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894159 4718 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894168 4718 flags.go:64] FLAG: --provider-id="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894177 4718 flags.go:64] FLAG: --qos-reserved="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894186 4718 flags.go:64] FLAG: --read-only-port="10255" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894195 4718 flags.go:64] FLAG: --register-node="true" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894203 4718 flags.go:64] FLAG: --register-schedulable="true" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894214 4718 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894229 4718 flags.go:64] FLAG: --registry-burst="10" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894238 4718 flags.go:64] FLAG: --registry-qps="5" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894246 4718 flags.go:64] FLAG: --reserved-cpus="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894255 4718 flags.go:64] FLAG: --reserved-memory="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894266 4718 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894276 4718 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894286 4718 flags.go:64] FLAG: --rotate-certificates="false" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894295 4718 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894304 4718 flags.go:64] FLAG: --runonce="false" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894313 4718 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894322 4718 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894331 4718 flags.go:64] FLAG: --seccomp-default="false" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894340 4718 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894349 4718 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894358 4718 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894367 4718 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894376 4718 flags.go:64] FLAG: --storage-driver-password="root" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894385 4718 flags.go:64] FLAG: --storage-driver-secure="false" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894394 4718 flags.go:64] FLAG: --storage-driver-table="stats" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894402 4718 flags.go:64] FLAG: --storage-driver-user="root" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 
14:45:49.894411 4718 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894420 4718 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894430 4718 flags.go:64] FLAG: --system-cgroups="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894472 4718 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894486 4718 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894495 4718 flags.go:64] FLAG: --tls-cert-file="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894504 4718 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894517 4718 flags.go:64] FLAG: --tls-min-version="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894526 4718 flags.go:64] FLAG: --tls-private-key-file="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894601 4718 flags.go:64] FLAG: --topology-manager-policy="none" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894610 4718 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894619 4718 flags.go:64] FLAG: --topology-manager-scope="container" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894628 4718 flags.go:64] FLAG: --v="2" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894640 4718 flags.go:64] FLAG: --version="false" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894651 4718 flags.go:64] FLAG: --vmodule="" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894662 4718 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.894672 4718 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.894933 4718 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.894946 4718 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.894956 4718 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.894965 4718 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.894974 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.894982 4718 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.894990 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.894997 4718 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895005 4718 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895012 4718 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895020 4718 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895028 4718 feature_gate.go:330] unrecognized feature gate: 
ClusterMonitoringConfig Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895035 4718 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895043 4718 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895051 4718 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895059 4718 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895067 4718 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895075 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895083 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895091 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895099 4718 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895106 4718 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895115 4718 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895125 4718 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895136 4718 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895145 4718 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895153 4718 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895161 4718 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895169 4718 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895179 4718 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895189 4718 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895199 4718 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895209 4718 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895219 4718 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
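
The flags.go:64 dump a little further up prints every command-line flag as FLAG: --name="value", which makes the effective invocation easy to recover as a dictionary. A minimal sketch, assuming that exact layout and the journal text in "kubelet.log" (hypothetical filename); values containing embedded double quotes would need a smarter pattern:

#!/usr/bin/env python3
# Rebuild the kubelet's effective flag set from the flags.go:64 dump.
import re
from pathlib import Path

text = Path("kubelet.log").read_text(encoding="utf-8")  # hypothetical filename
flags = dict(re.findall(r'FLAG: (--[\w-]+)="([^"]*)"', text))
print(flags["--config"])                # /etc/kubernetes/kubelet.conf
print(flags["--node-ip"])               # 192.168.126.11
print(flags["--register-with-taints"])  # node-role.kubernetes.io/master=:NoSchedule
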
Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895231 4718 feature_gate.go:330] unrecognized feature gate: Example Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895241 4718 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895252 4718 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895261 4718 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895270 4718 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895278 4718 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895286 4718 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895295 4718 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895302 4718 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895310 4718 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895318 4718 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895325 4718 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895333 4718 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895341 4718 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895348 4718 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895357 4718 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895364 4718 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895372 4718 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895380 4718 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895388 4718 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895395 4718 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895403 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895412 4718 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895422 4718 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895467 4718 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895480 4718 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation 
Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895489 4718 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895498 4718 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895506 4718 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895514 4718 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895522 4718 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895532 4718 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895542 4718 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895552 4718 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895562 4718 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895572 4718 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.895584 4718 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.895600 4718 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.910365 4718 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.910408 4718 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910644 4718 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910667 4718 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910678 4718 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910689 4718 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
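
Each parsing pass ends with a feature_gate.go:386 summary line holding the effective gate map in Go's default map formatting. That single line is the authoritative result of all the warnings above it, and it can be pulled apart with one regular expression. A sketch under those assumptions, with the journal text in "kubelet.log" (hypothetical filename):

#!/usr/bin/env python3
# Extract the effective feature-gate map from the feature_gate.go:386 line.
import re
from pathlib import Path

text = Path("kubelet.log").read_text(encoding="utf-8")  # hypothetical filename
m = re.search(r"feature gates: \{map\[([^\]]*)\]\}", text)
if m:
    # Go prints the map as space-separated "Name:bool" pairs.
    gates = dict(pair.split(":", 1) for pair in m.group(1).split())
    enabled = sorted(k for k, v in gates.items() if v == "true")
    print("enabled:", ", ".join(enabled))
    # -> CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders,
    #    KMSv1, ValidatingAdmissionPolicy
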
Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910699 4718 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910708 4718 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910717 4718 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910725 4718 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910733 4718 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910741 4718 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910749 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910757 4718 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910765 4718 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910776 4718 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910790 4718 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910811 4718 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910822 4718 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910837 4718 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910849 4718 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910864 4718 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910876 4718 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910886 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910895 4718 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910903 4718 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910911 4718 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910924 4718 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910933 4718 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910941 4718 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910948 4718 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910956 4718 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910964 4718 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910971 4718 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910979 4718 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910986 4718 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.910994 4718 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911002 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911009 4718 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911017 4718 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911024 4718 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911032 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911040 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911048 4718 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911055 4718 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911063 4718 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911070 4718 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911080 4718 feature_gate.go:353] 
Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911090 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911099 4718 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911108 4718 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911116 4718 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911124 4718 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911132 4718 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911140 4718 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911147 4718 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911155 4718 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911163 4718 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911171 4718 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911178 4718 feature_gate.go:330] unrecognized feature gate: Example Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911186 4718 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911193 4718 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911201 4718 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911210 4718 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911218 4718 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911225 4718 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911232 4718 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911240 4718 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911248 4718 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911255 4718 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911263 4718 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911270 4718 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911279 4718 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 23 14:45:49 crc kubenswrapper[4718]: 
I1123 14:45:49.911291 4718 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911569 4718 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911593 4718 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911603 4718 feature_gate.go:330] unrecognized feature gate: Example Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911613 4718 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911624 4718 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911634 4718 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911644 4718 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911653 4718 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911662 4718 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911670 4718 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911679 4718 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911687 4718 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911695 4718 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911703 4718 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911711 4718 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911720 4718 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911728 4718 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911737 4718 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911745 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911753 4718 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911763 4718 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911773 4718 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911782 4718 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911792 4718 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911800 4718 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911811 4718 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911820 4718 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911828 4718 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911835 4718 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911843 4718 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911851 4718 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911859 4718 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911866 4718 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911874 4718 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911882 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911889 4718 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911897 4718 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911905 4718 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911913 4718 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911924 4718 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911933 4718 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911941 4718 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911949 4718 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911956 4718 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911964 4718 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911972 4718 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911980 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911987 4718 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.911995 4718 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.912002 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.912010 4718 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.912018 4718 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.912025 4718 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.912033 4718 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.912040 4718 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.912048 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.912056 4718 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.912063 4718 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.912071 4718 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.912078 4718 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.912086 4718 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.912095 4718 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.912102 4718 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.912110 4718 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.912118 4718 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot 
Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.912126 4718 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.912133 4718 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.912141 4718 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.912149 4718 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.912156 4718 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 23 14:45:49 crc kubenswrapper[4718]: W1123 14:45:49.912164 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.912175 4718 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.916477 4718 server.go:940] "Client rotation is on, will bootstrap in background" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.926244 4718 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.926418 4718 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
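
With client rotation on, the certificate_manager lines just below log both the certificate expiration and the rotation deadline it chose. A minimal sketch that pulls the two timestamps out of the journal text in "kubelet.log" (hypothetical filename); note the logged deadline (2025-11-13) already lies in the past at this boot (Nov 23), which is consistent with the kubelet immediately logging "Rotating certificates" and filing a CSR:

#!/usr/bin/env python3
# Compare the kubelet client certificate's expiration with its rotation deadline.
import re
from datetime import datetime, timezone
from pathlib import Path

text = Path("kubelet.log").read_text(encoding="utf-8")  # hypothetical filename
exp = re.search(r"Certificate expiration is ([\d-]+ [\d:]+) \+0000 UTC", text)
dl = re.search(r"rotation deadline is ([\d-]+ [\d:]+)\.\d+ \+0000 UTC", text)

fmt = "%Y-%m-%d %H:%M:%S"
expiration = datetime.strptime(exp.group(1), fmt).replace(tzinfo=timezone.utc)
deadline = datetime.strptime(dl.group(1), fmt).replace(tzinfo=timezone.utc)
print("expires:           ", expiration)
print("rotation deadline: ", deadline)
print("deadline passed:   ", deadline < datetime.now(timezone.utc))
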
Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.939393 4718 server.go:997] "Starting client certificate rotation"
Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.939478 4718 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.939790 4718 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-13 09:16:01.481683557 +0000 UTC
Nov 23 14:45:49 crc kubenswrapper[4718]: I1123 14:45:49.939907 4718 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.074357 4718 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 23 14:45:50 crc kubenswrapper[4718]: E1123 14:45:50.085019 4718 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.098039 4718 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.131898 4718 log.go:25] "Validated CRI v1 runtime API"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.245215 4718 log.go:25] "Validated CRI v1 image API"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.251105 4718 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.267301 4718 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-23-14-36-58-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.267359 4718 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}]
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.302622 4718 manager.go:217] Machine: {Timestamp:2025-11-23 14:45:50.291757563 +0000 UTC m=+1.531377487 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:18ae8787-5c21-4432-a923-66f25f4a0fdf BootID:615b60d9-25a6-45a0-a365-4c423e4d937a Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:98:2e:de Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:98:2e:de Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:cf:e7:e1 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:3e:bb:ae Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d6:d5:9c Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:3e:35:1d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:fa:0e:e3:bb:b1:4e Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:0a:23:8e:25:e8:ba Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.303043 4718 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.303258 4718 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.310481 4718 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.310869 4718 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.310930 4718 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.311230 4718 topology_manager.go:138] "Creating topology manager with none policy"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.311248 4718 container_manager_linux.go:303] "Creating device plugin manager"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.311830 4718 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.311874 4718 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.312166 4718 state_mem.go:36] "Initialized new in-memory state store"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.312303 4718 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.317899 4718 kubelet.go:418] "Attempting to sync node with API server"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.317934 4718 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.317959 4718 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.317980 4718 kubelet.go:324] "Adding apiserver pod source"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.318110 4718 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.329268 4718 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.330494 4718 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.333213 4718 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 23 14:45:50 crc kubenswrapper[4718]: W1123 14:45:50.341832 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.341916 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.341954 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.341969 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Nov 23 14:45:50 crc kubenswrapper[4718]: E1123 14:45:50.341966 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.341988 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.342011 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.342026 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.342039 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.342062 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.342089 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.342103 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.342145 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.342160 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.342194 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Nov 23 14:45:50 crc kubenswrapper[4718]: W1123 14:45:50.342144 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused
Nov 23 14:45:50 crc kubenswrapper[4718]: E1123 14:45:50.342282 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.342822 4718 server.go:1280] "Started kubelet"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.344587 4718 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.344635 4718 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.344825 4718 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.345863 4718 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 23 14:45:50 crc systemd[1]: Started Kubernetes Kubelet.
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.354417 4718 server.go:460] "Adding debug handlers to kubelet server" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.363867 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.363947 4718 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 14:45:50 crc kubenswrapper[4718]: E1123 14:45:50.366426 4718 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.366700 4718 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.366766 4718 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.366873 4718 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.367832 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 21:20:52.486908135 +0000 UTC Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.367991 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 774h35m2.118921921s for next certificate rotation Nov 23 14:45:50 crc kubenswrapper[4718]: W1123 14:45:50.368214 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 23 14:45:50 crc kubenswrapper[4718]: E1123 14:45:50.368372 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError" Nov 23 14:45:50 crc kubenswrapper[4718]: E1123 14:45:50.368322 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="200ms" Nov 23 14:45:50 crc kubenswrapper[4718]: E1123 14:45:50.372810 4718 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.2:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187aaa06d23dea1e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-23 14:45:50.342769182 +0000 UTC m=+1.582389076,LastTimestamp:2025-11-23 14:45:50.342769182 +0000 UTC m=+1.582389076,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.379261 4718 factory.go:55] Registering systemd factory Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.379299 4718 factory.go:221] Registration of the systemd 
container factory successfully Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.379706 4718 factory.go:153] Registering CRI-O factory Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.379726 4718 factory.go:221] Registration of the crio container factory successfully Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.379804 4718 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.379831 4718 factory.go:103] Registering Raw factory Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.379851 4718 manager.go:1196] Started watching for new ooms in manager Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.380576 4718 manager.go:319] Starting recovery of all containers Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.400787 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.401497 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.401533 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.401562 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.401586 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.401646 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.401668 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.401688 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 23 14:45:50 
crc kubenswrapper[4718]: I1123 14:45:50.401719 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.401739 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.401763 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.401782 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.401801 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.401844 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.401865 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.401883 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.401915 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.401936 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.401955 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: 
I1123 14:45:50.401985 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402004 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402024 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402044 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402076 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402105 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402127 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402151 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402183 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402209 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402245 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402270 4718 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402308 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402383 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402413 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402432 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402483 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402501 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402522 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402545 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402566 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402619 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402639 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402659 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402679 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402698 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402718 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402738 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402758 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402782 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402801 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402822 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402843 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402870 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402893 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402926 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402950 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.402971 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403002 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403023 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403044 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403064 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403083 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403101 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403120 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403143 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403162 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403182 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403202 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403220 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403240 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403259 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403280 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403297 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403318 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403340 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403359 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403378 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403397 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403430 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403486 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403508 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403530 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403551 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403570 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403590 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403611 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403630 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403650 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403670 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403689 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403709 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403728 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403746 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.403768 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.405458 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.406535 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.406734 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.406887 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.407027 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.407137 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.407250 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.407369 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.407548 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.407666 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.407799 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.407935 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.408060 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.408182 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.408290 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.408402 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.408558 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.408712 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.408826 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.408926 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.409049 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.409162 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.409287 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.409461 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.409573 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.409725 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.409838 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.410563 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.410740 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.410770 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.410807 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.410852 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.411875 4718 manager.go:324] Recovery completed Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.428522 4718 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.428657 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.428689 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 23 
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.428714 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.428739 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.428761 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.428784 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.428806 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.428829 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.428861 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.428884 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.428908 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.428931 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.428994 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.429149 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.429184 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.429538 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.429616 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.429639 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.429664 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.429688 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430134 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430181 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430203 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430225 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430254 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430305 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430341 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430372 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430402 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430430 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430527 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430545 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430570 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430592 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430617 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430637 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430659 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430680 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430701 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430714 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430732 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430748 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430765 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430784 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430800 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430815 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430831 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430847 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430861 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430876 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430891 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430905 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430920 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430938 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430952 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430966 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430981 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.430997 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431012 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431027 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431049 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431063 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431078 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431092 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431107 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431122 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431137 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431151 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431164 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431180 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431193 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431209 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431224 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431238 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431254 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431268 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431283 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431298 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431314 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431328 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431342 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431356 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431369 4718 reconstruct.go:97] "Volume reconstruction finished"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.431378 4718 reconciler.go:26] "Reconciler: start to sync state"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.432606 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.432746 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.432769 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.433957 4718 cpu_manager.go:225] "Starting CPU manager" policy="none"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.434067 4718 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.434095 4718 state_mem.go:36] "Initialized new in-memory state store"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.435680 4718 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
protocol="IPv6" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.439720 4718 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.439748 4718 kubelet.go:2335] "Starting kubelet main sync loop" Nov 23 14:45:50 crc kubenswrapper[4718]: E1123 14:45:50.439814 4718 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 14:45:50 crc kubenswrapper[4718]: W1123 14:45:50.440589 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 23 14:45:50 crc kubenswrapper[4718]: E1123 14:45:50.440662 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError" Nov 23 14:45:50 crc kubenswrapper[4718]: E1123 14:45:50.466825 4718 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 23 14:45:50 crc kubenswrapper[4718]: E1123 14:45:50.540257 4718 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 23 14:45:50 crc kubenswrapper[4718]: E1123 14:45:50.567568 4718 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 23 14:45:50 crc kubenswrapper[4718]: E1123 14:45:50.569403 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="400ms" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.608645 4718 policy_none.go:49] "None policy: Start" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.611230 4718 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.611262 4718 state_mem.go:35] "Initializing new in-memory state store" Nov 23 14:45:50 crc kubenswrapper[4718]: E1123 14:45:50.667713 4718 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.735190 4718 manager.go:334] "Starting Device Plugin manager" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.735254 4718 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.735266 4718 server.go:79] "Starting device plugin registration server" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.738193 4718 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.738211 4718 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.738326 4718 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 23 14:45:50 crc kubenswrapper[4718]: 
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.738425 4718 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.738433 4718 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.740931 4718 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.741024 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.741941 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.741975 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.741983 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.742106 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.742310 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.742342 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.742752 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.742779 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.742788 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.742900 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.742992 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.743015 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.744267 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.744288 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.744295 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:45:50 crc kubenswrapper[4718]: E1123 14:45:50.747001 4718 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.747722 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.747741 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.747750 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.747908 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.747918 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.747925 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.748008 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.748329 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.748364 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.749264 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.749283 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.749291 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.749356 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.749681 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.749706 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.749957 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.749975 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.749984 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.753028 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.753047 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.753055 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.753044 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.753081 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.753093 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.753299 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.753948 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.753967 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.753974 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.836307 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.836338 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.836371 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.836390 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.836406 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.836420 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.836449 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.836464 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.836477 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.836507 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.836521 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.836534 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.836551 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.836565 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.836578 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.840026 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.841097 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.841125 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:45:50 crc kubenswrapper[4718]: 
I1123 14:45:50.841133 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.841151 4718 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 14:45:50 crc kubenswrapper[4718]: E1123 14:45:50.841363 4718 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.2:6443: connect: connection refused" node="crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938103 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938183 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938215 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938248 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938275 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938299 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938312 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938370 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938402 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
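Note that node registration also bounces off the dead apiserver while volume setup proceeds anyway: host-path volumes have no controller attach step, so VerifyControllerAttachedVolume passes trivially and MountVolume.SetUp reduces to checking the directory on the host, which is why each SetUp succeeds within the same millisecond. An illustrative reduction of that check (the real host directories are not shown in the log; the path below is a placeholder):

    // hostpath_sketch.go - illustrative success criterion for a host-path
    // mount; not kubelet's implementation, which also honors the volume's
    // Type field and performs the actual bind into the pod directory.
    package main

    import (
        "fmt"
        "os"
    )

    func setUpHostPath(path string) error {
        info, err := os.Stat(path)
        if err != nil {
            return fmt.Errorf("MountVolume.SetUp failed: %w", err)
        }
        if !info.IsDir() {
            return fmt.Errorf("MountVolume.SetUp failed: %s is not a directory", path)
        }
        return nil
    }

    func main() {
        path := "/var/lib/kubelet" // placeholder host directory for the demo
        if len(os.Args) > 1 {
            path = os.Args[1]
        }
        if err := setUpHostPath(path); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("MountVolume.SetUp succeeded")
    }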
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938458 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938476 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938477 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938488 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938506 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938517 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938521 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938546 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938568 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938566 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938590 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938604 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938619 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938642 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938675 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938734 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938536 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938711 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938785 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938799 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: I1123 14:45:50.938829 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 14:45:50 crc kubenswrapper[4718]: E1123 14:45:50.971087 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="800ms" Nov 23 14:45:51 crc kubenswrapper[4718]: I1123 14:45:51.042277 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:45:51 crc kubenswrapper[4718]: I1123 14:45:51.044000 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:45:51 crc kubenswrapper[4718]: I1123 14:45:51.044054 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:45:51 crc kubenswrapper[4718]: I1123 14:45:51.044072 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:45:51 crc kubenswrapper[4718]: I1123 14:45:51.044110 4718 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 14:45:51 crc kubenswrapper[4718]: E1123 14:45:51.044658 4718 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.2:6443: connect: connection refused" node="crc" Nov 23 14:45:51 crc kubenswrapper[4718]: I1123 14:45:51.074479 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 14:45:51 crc kubenswrapper[4718]: I1123 14:45:51.104732 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 23 14:45:51 crc kubenswrapper[4718]: I1123 14:45:51.115580 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 14:45:51 crc kubenswrapper[4718]: I1123 14:45:51.132710 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 14:45:51 crc kubenswrapper[4718]: I1123 14:45:51.138460 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 14:45:51 crc kubenswrapper[4718]: W1123 14:45:51.283619 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 23 14:45:51 crc kubenswrapper[4718]: E1123 14:45:51.283744 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError" Nov 23 14:45:51 crc kubenswrapper[4718]: W1123 14:45:51.314312 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-054b08b7e973780f15b18f256f4df7483584d8224265c592d4136be4cc19badc WatchSource:0}: Error finding container 054b08b7e973780f15b18f256f4df7483584d8224265c592d4136be4cc19badc: Status 404 returned error can't find the container with id 054b08b7e973780f15b18f256f4df7483584d8224265c592d4136be4cc19badc Nov 23 14:45:51 crc kubenswrapper[4718]: W1123 14:45:51.316573 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-013d1db3809aa5051e0971cb9f0694cdfac4975e5872b4550c68d1652e50cb66 WatchSource:0}: Error finding container 013d1db3809aa5051e0971cb9f0694cdfac4975e5872b4550c68d1652e50cb66: Status 404 returned error can't find the container with id 013d1db3809aa5051e0971cb9f0694cdfac4975e5872b4550c68d1652e50cb66 Nov 23 14:45:51 crc kubenswrapper[4718]: W1123 14:45:51.318048 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-f4b6510925f40d6cd21ecfdf4c837de1a2823428a2b652314c04054ac413dfa4 WatchSource:0}: Error finding container f4b6510925f40d6cd21ecfdf4c837de1a2823428a2b652314c04054ac413dfa4: Status 404 returned error can't find the container with id f4b6510925f40d6cd21ecfdf4c837de1a2823428a2b652314c04054ac413dfa4 Nov 23 14:45:51 crc kubenswrapper[4718]: W1123 14:45:51.319836 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-6283c1118bb33dda24a3cfa4af868a138c9282959da10c704cef2340a2d023ba WatchSource:0}: Error finding container 6283c1118bb33dda24a3cfa4af868a138c9282959da10c704cef2340a2d023ba: Status 404 returned error can't find the container with id 6283c1118bb33dda24a3cfa4af868a138c9282959da10c704cef2340a2d023ba Nov 23 14:45:51 crc kubenswrapper[4718]: W1123 14:45:51.323541 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-167f2c7a8757c093aebeea270b1aa6bc400d2237d312f0d6ad189891943ebc6f WatchSource:0}: Error finding container 167f2c7a8757c093aebeea270b1aa6bc400d2237d312f0d6ad189891943ebc6f: Status 404 returned error can't find the container with id 167f2c7a8757c093aebeea270b1aa6bc400d2237d312f0d6ad189891943ebc6f Nov 23 14:45:51 crc kubenswrapper[4718]: 
I1123 14:45:51.346111 4718 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 23 14:45:51 crc kubenswrapper[4718]: I1123 14:45:51.445000 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:45:51 crc kubenswrapper[4718]: I1123 14:45:51.445888 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"167f2c7a8757c093aebeea270b1aa6bc400d2237d312f0d6ad189891943ebc6f"} Nov 23 14:45:51 crc kubenswrapper[4718]: I1123 14:45:51.446229 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:45:51 crc kubenswrapper[4718]: I1123 14:45:51.446283 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:45:51 crc kubenswrapper[4718]: I1123 14:45:51.446301 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:45:51 crc kubenswrapper[4718]: I1123 14:45:51.446334 4718 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 14:45:51 crc kubenswrapper[4718]: E1123 14:45:51.446955 4718 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.2:6443: connect: connection refused" node="crc" Nov 23 14:45:51 crc kubenswrapper[4718]: I1123 14:45:51.448155 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6283c1118bb33dda24a3cfa4af868a138c9282959da10c704cef2340a2d023ba"} Nov 23 14:45:51 crc kubenswrapper[4718]: I1123 14:45:51.449844 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"013d1db3809aa5051e0971cb9f0694cdfac4975e5872b4550c68d1652e50cb66"} Nov 23 14:45:51 crc kubenswrapper[4718]: I1123 14:45:51.451984 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"054b08b7e973780f15b18f256f4df7483584d8224265c592d4136be4cc19badc"} Nov 23 14:45:51 crc kubenswrapper[4718]: I1123 14:45:51.453337 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f4b6510925f40d6cd21ecfdf4c837de1a2823428a2b652314c04054ac413dfa4"} Nov 23 14:45:51 crc kubenswrapper[4718]: W1123 14:45:51.748403 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 23 14:45:51 crc kubenswrapper[4718]: E1123 14:45:51.748589 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError" Nov 23 14:45:51 crc kubenswrapper[4718]: E1123 14:45:51.772042 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="1.6s" Nov 23 14:45:51 crc kubenswrapper[4718]: W1123 14:45:51.777957 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 23 14:45:51 crc kubenswrapper[4718]: E1123 14:45:51.778070 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError" Nov 23 14:45:51 crc kubenswrapper[4718]: W1123 14:45:51.876615 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 23 14:45:51 crc kubenswrapper[4718]: E1123 14:45:51.876735 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError" Nov 23 14:45:52 crc kubenswrapper[4718]: I1123 14:45:52.164386 4718 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 23 14:45:52 crc kubenswrapper[4718]: E1123 14:45:52.166550 4718 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError" Nov 23 14:45:52 crc kubenswrapper[4718]: I1123 14:45:52.247714 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:45:52 crc kubenswrapper[4718]: I1123 14:45:52.249672 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:45:52 crc kubenswrapper[4718]: I1123 14:45:52.249729 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:45:52 crc kubenswrapper[4718]: I1123 14:45:52.249751 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:45:52 crc kubenswrapper[4718]: I1123 14:45:52.249792 4718 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 14:45:52 crc kubenswrapper[4718]: E1123 14:45:52.250498 4718 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.2:6443: connect: connection refused" node="crc" Nov 23 14:45:52 crc kubenswrapper[4718]: I1123 14:45:52.346192 4718 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 23 14:45:53 crc kubenswrapper[4718]: I1123 14:45:53.347152 4718 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 23 14:45:53 crc kubenswrapper[4718]: E1123 14:45:53.373403 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="3.2s" Nov 23 14:45:53 crc kubenswrapper[4718]: W1123 14:45:53.463363 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 23 14:45:53 crc kubenswrapper[4718]: E1123 14:45:53.463948 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError" Nov 23 14:45:53 crc kubenswrapper[4718]: I1123 14:45:53.851640 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:45:53 crc kubenswrapper[4718]: I1123 14:45:53.854516 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:45:53 crc kubenswrapper[4718]: I1123 14:45:53.854575 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:45:53 crc kubenswrapper[4718]: I1123 14:45:53.854594 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:45:53 crc kubenswrapper[4718]: I1123 14:45:53.854629 4718 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 14:45:53 crc kubenswrapper[4718]: E1123 14:45:53.855217 4718 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.2:6443: connect: connection refused" node="crc" Nov 23 14:45:54 crc kubenswrapper[4718]: W1123 14:45:54.055626 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 23 14:45:54 crc kubenswrapper[4718]: E1123 14:45:54.055782 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection 
refused" logger="UnhandledError" Nov 23 14:45:54 crc kubenswrapper[4718]: W1123 14:45:54.239045 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 23 14:45:54 crc kubenswrapper[4718]: E1123 14:45:54.239129 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError" Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.346491 4718 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.467312 4718 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c" exitCode=0 Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.467506 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c"} Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.467643 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.469272 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.469339 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.469363 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.471813 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10"} Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.471877 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396"} Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.472399 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.473537 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.473587 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 
14:45:54.473606 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.474383 4718 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="b5f39696d69ffc80b7749fd7ecf73c9219ea417504720ff5e26f35a2b1d637a9" exitCode=0 Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.474537 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b5f39696d69ffc80b7749fd7ecf73c9219ea417504720ff5e26f35a2b1d637a9"} Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.474579 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.475779 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.475831 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.475850 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.476531 4718 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3f8f18c498c78029693d699412f511a760b5b633de5ee6ebec7f71abcea1aa5f" exitCode=0 Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.476576 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3f8f18c498c78029693d699412f511a760b5b633de5ee6ebec7f71abcea1aa5f"} Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.476642 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.477954 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.477985 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.477994 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.479298 4718 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a" exitCode=0 Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.479360 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a"} Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.479544 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.480840 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:45:54 crc 
kubenswrapper[4718]: I1123 14:45:54.480894 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:45:54 crc kubenswrapper[4718]: I1123 14:45:54.480911 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:45:54 crc kubenswrapper[4718]: W1123 14:45:54.875335 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 23 14:45:54 crc kubenswrapper[4718]: E1123 14:45:54.875529 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError" Nov 23 14:45:55 crc kubenswrapper[4718]: I1123 14:45:55.347497 4718 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 23 14:45:55 crc kubenswrapper[4718]: I1123 14:45:55.489509 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e828a18d60c1a77b19b2a707494209d60b21e5a11007c4029bcfffcc632866b0"} Nov 23 14:45:55 crc kubenswrapper[4718]: I1123 14:45:55.496958 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7"} Nov 23 14:45:55 crc kubenswrapper[4718]: I1123 14:45:55.500835 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d"} Nov 23 14:45:55 crc kubenswrapper[4718]: I1123 14:45:55.500874 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0"} Nov 23 14:45:55 crc kubenswrapper[4718]: I1123 14:45:55.500922 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:45:55 crc kubenswrapper[4718]: I1123 14:45:55.502184 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:45:55 crc kubenswrapper[4718]: I1123 14:45:55.502242 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:45:55 crc kubenswrapper[4718]: I1123 14:45:55.502260 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:45:55 crc kubenswrapper[4718]: I1123 14:45:55.503599 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6e24b86aa62a28ed7b8b009c4899f8e97080338877869c5f39a8861fa348e2a3"} Nov 23 14:45:55 crc kubenswrapper[4718]: I1123 14:45:55.503739 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:45:55 crc kubenswrapper[4718]: I1123 14:45:55.505545 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:45:55 crc kubenswrapper[4718]: I1123 14:45:55.505596 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:45:55 crc kubenswrapper[4718]: I1123 14:45:55.505614 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:45:55 crc kubenswrapper[4718]: I1123 14:45:55.507554 4718 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ae197633d78076184167ffd812868272b8be041c8ad9e2fc56d54d8ed5cd7e73" exitCode=0 Nov 23 14:45:55 crc kubenswrapper[4718]: I1123 14:45:55.507609 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ae197633d78076184167ffd812868272b8be041c8ad9e2fc56d54d8ed5cd7e73"} Nov 23 14:45:55 crc kubenswrapper[4718]: I1123 14:45:55.507699 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:45:55 crc kubenswrapper[4718]: I1123 14:45:55.508775 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:45:55 crc kubenswrapper[4718]: I1123 14:45:55.508828 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:45:55 crc kubenswrapper[4718]: I1123 14:45:55.508848 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:45:56 crc kubenswrapper[4718]: I1123 14:45:56.346229 4718 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 23 14:45:56 crc kubenswrapper[4718]: I1123 14:45:56.465565 4718 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 23 14:45:56 crc kubenswrapper[4718]: E1123 14:45:56.466884 4718 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError" Nov 23 14:45:56 crc kubenswrapper[4718]: I1123 14:45:56.514163 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf"} Nov 23 14:45:56 crc kubenswrapper[4718]: I1123 14:45:56.514221 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7"} Nov 23 14:45:56 crc kubenswrapper[4718]: I1123 14:45:56.517484 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"780ffe920ee499dfcc85eb6fb05256e8e66d53cb3405bcc304d608bd011460a0"} Nov 23 14:45:56 crc kubenswrapper[4718]: I1123 14:45:56.517475 4718 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="780ffe920ee499dfcc85eb6fb05256e8e66d53cb3405bcc304d608bd011460a0" exitCode=0 Nov 23 14:45:56 crc kubenswrapper[4718]: I1123 14:45:56.517625 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:45:56 crc kubenswrapper[4718]: I1123 14:45:56.518871 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:45:56 crc kubenswrapper[4718]: I1123 14:45:56.518924 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:45:56 crc kubenswrapper[4718]: I1123 14:45:56.518941 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:45:56 crc kubenswrapper[4718]: I1123 14:45:56.526252 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:45:56 crc kubenswrapper[4718]: I1123 14:45:56.526393 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:45:56 crc kubenswrapper[4718]: I1123 14:45:56.526254 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ac138ef465b8fb107aa6828f2d531a40cae862514aa9629e5b7bf88c2348c3c7"} Nov 23 14:45:56 crc kubenswrapper[4718]: I1123 14:45:56.526661 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"04b9e9ababbd821f8c8d2d640867c41d6918182b6869ee198fee85a37934e530"} Nov 23 14:45:56 crc kubenswrapper[4718]: I1123 14:45:56.527938 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:45:56 crc kubenswrapper[4718]: I1123 14:45:56.527965 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:45:56 crc kubenswrapper[4718]: I1123 14:45:56.528003 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:45:56 crc kubenswrapper[4718]: I1123 14:45:56.528015 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:45:56 crc kubenswrapper[4718]: I1123 14:45:56.528031 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:45:56 crc kubenswrapper[4718]: I1123 14:45:56.528038 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:45:56 crc kubenswrapper[4718]: E1123 14:45:56.575204 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="6.4s" Nov 23 14:45:56 crc kubenswrapper[4718]: W1123 14:45:56.854279 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 23 14:45:56 crc kubenswrapper[4718]: E1123 14:45:56.854421 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError" Nov 23 14:45:57 crc kubenswrapper[4718]: I1123 14:45:57.055665 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:45:57 crc kubenswrapper[4718]: I1123 14:45:57.058059 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:45:57 crc kubenswrapper[4718]: I1123 14:45:57.058128 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:45:57 crc kubenswrapper[4718]: I1123 14:45:57.058149 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:45:57 crc kubenswrapper[4718]: I1123 14:45:57.058196 4718 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 14:45:57 crc kubenswrapper[4718]: E1123 14:45:57.058939 4718 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.2:6443: connect: connection refused" node="crc" Nov 23 14:45:57 crc kubenswrapper[4718]: E1123 14:45:57.194428 4718 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.2:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187aaa06d23dea1e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-23 14:45:50.342769182 +0000 UTC m=+1.582389076,LastTimestamp:2025-11-23 14:45:50.342769182 +0000 UTC m=+1.582389076,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 23 14:45:57 crc kubenswrapper[4718]: I1123 14:45:57.346064 4718 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 23 14:45:57 crc kubenswrapper[4718]: I1123 14:45:57.535197 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5"} Nov 23 14:45:57 crc kubenswrapper[4718]: I1123 14:45:57.542279 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a90f00d571d0e6adc9a76ef89739e8c33c11a9d2064500c87aef98e1633af563"} Nov 23 14:45:57 crc kubenswrapper[4718]: I1123 14:45:57.542318 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:45:57 crc kubenswrapper[4718]: I1123 14:45:57.543820 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:45:57 crc kubenswrapper[4718]: I1123 14:45:57.543903 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:45:57 crc kubenswrapper[4718]: I1123 14:45:57.543923 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:45:57 crc kubenswrapper[4718]: W1123 14:45:57.899918 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 23 14:45:57 crc kubenswrapper[4718]: E1123 14:45:57.900043 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError" Nov 23 14:45:58 crc kubenswrapper[4718]: I1123 14:45:58.132756 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 14:45:58 crc kubenswrapper[4718]: I1123 14:45:58.133046 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:45:58 crc kubenswrapper[4718]: I1123 14:45:58.134868 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:45:58 crc kubenswrapper[4718]: I1123 14:45:58.134920 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:45:58 crc kubenswrapper[4718]: I1123 14:45:58.134933 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:45:58 crc kubenswrapper[4718]: I1123 14:45:58.345942 4718 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 23 14:45:58 crc kubenswrapper[4718]: W1123 14:45:58.396234 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 23 14:45:58 crc kubenswrapper[4718]: E1123 14:45:58.396348 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" 
logger="UnhandledError" Nov 23 14:45:58 crc kubenswrapper[4718]: I1123 14:45:58.549340 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7486a4a6af7b8defa077d6084d60dcb97b57489098baee4fabee8b20c2728661"} Nov 23 14:45:58 crc kubenswrapper[4718]: I1123 14:45:58.549401 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:45:58 crc kubenswrapper[4718]: I1123 14:45:58.550557 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:45:58 crc kubenswrapper[4718]: I1123 14:45:58.550594 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:45:58 crc kubenswrapper[4718]: I1123 14:45:58.550606 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:45:58 crc kubenswrapper[4718]: I1123 14:45:58.554052 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2b3135ef1e720022c5c8840c06e9fd4b9b22fa85220e89c94b0504c25b1f8549"} Nov 23 14:45:58 crc kubenswrapper[4718]: I1123 14:45:58.554093 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a4f5a5f59e7490aa29c34158b488104fb5b7a97c4d592b798020480ec6512a13"} Nov 23 14:45:59 crc kubenswrapper[4718]: I1123 14:45:59.346436 4718 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 23 14:45:59 crc kubenswrapper[4718]: I1123 14:45:59.565280 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:45:59 crc kubenswrapper[4718]: I1123 14:45:59.567799 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"064801ca5a1264542db6002ef94ea9fff905928bcbd8637c7c1b6aa72f8080b9"} Nov 23 14:45:59 crc kubenswrapper[4718]: I1123 14:45:59.567972 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c6ede509d54bf79c21ef203d95bff038b24fb6a06742d0d37c8ec4d7739bb4d8"} Nov 23 14:45:59 crc kubenswrapper[4718]: I1123 14:45:59.568128 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 14:45:59 crc kubenswrapper[4718]: I1123 14:45:59.573522 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:45:59 crc kubenswrapper[4718]: I1123 14:45:59.573578 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:45:59 crc kubenswrapper[4718]: I1123 14:45:59.573600 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:00 crc kubenswrapper[4718]: I1123 14:46:00.000220 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 14:46:00 crc kubenswrapper[4718]: I1123 14:46:00.000402 4718 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Nov 23 14:46:00 crc kubenswrapper[4718]: I1123 14:46:00.000514 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" Nov 23 14:46:00 crc kubenswrapper[4718]: I1123 14:46:00.346827 4718 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 23 14:46:00 crc kubenswrapper[4718]: I1123 14:46:00.509142 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 14:46:00 crc kubenswrapper[4718]: I1123 14:46:00.509388 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:46:00 crc kubenswrapper[4718]: I1123 14:46:00.511070 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:00 crc kubenswrapper[4718]: I1123 14:46:00.511123 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:00 crc kubenswrapper[4718]: I1123 14:46:00.511146 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:00 crc kubenswrapper[4718]: I1123 14:46:00.569938 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 23 14:46:00 crc kubenswrapper[4718]: I1123 14:46:00.572588 4718 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7486a4a6af7b8defa077d6084d60dcb97b57489098baee4fabee8b20c2728661" exitCode=255 Nov 23 14:46:00 crc kubenswrapper[4718]: I1123 14:46:00.572654 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7486a4a6af7b8defa077d6084d60dcb97b57489098baee4fabee8b20c2728661"} Nov 23 14:46:00 crc kubenswrapper[4718]: I1123 14:46:00.572740 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:46:00 crc kubenswrapper[4718]: I1123 14:46:00.572777 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:46:00 crc kubenswrapper[4718]: I1123 14:46:00.574496 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:00 crc kubenswrapper[4718]: I1123 14:46:00.574556 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:00 crc kubenswrapper[4718]: I1123 14:46:00.574576 4718 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 23 14:46:00 crc kubenswrapper[4718]: I1123 14:46:00.574595 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:00 crc kubenswrapper[4718]: I1123 14:46:00.574629 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:00 crc kubenswrapper[4718]: I1123 14:46:00.574645 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:00 crc kubenswrapper[4718]: I1123 14:46:00.575296 4718 scope.go:117] "RemoveContainer" containerID="7486a4a6af7b8defa077d6084d60dcb97b57489098baee4fabee8b20c2728661" Nov 23 14:46:00 crc kubenswrapper[4718]: I1123 14:46:00.607735 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 14:46:00 crc kubenswrapper[4718]: E1123 14:46:00.747097 4718 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 23 14:46:01 crc kubenswrapper[4718]: I1123 14:46:01.133358 4718 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 23 14:46:01 crc kubenswrapper[4718]: I1123 14:46:01.133498 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 14:46:01 crc kubenswrapper[4718]: I1123 14:46:01.580661 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 23 14:46:01 crc kubenswrapper[4718]: I1123 14:46:01.583516 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:46:01 crc kubenswrapper[4718]: I1123 14:46:01.583596 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21"} Nov 23 14:46:01 crc kubenswrapper[4718]: I1123 14:46:01.583742 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:46:01 crc kubenswrapper[4718]: I1123 14:46:01.585312 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:01 crc kubenswrapper[4718]: I1123 14:46:01.585361 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:01 crc kubenswrapper[4718]: I1123 14:46:01.585378 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:01 crc kubenswrapper[4718]: I1123 14:46:01.585693 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:01 crc kubenswrapper[4718]: I1123 
14:46:01.585750 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:01 crc kubenswrapper[4718]: I1123 14:46:01.585770 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:01 crc kubenswrapper[4718]: I1123 14:46:01.949657 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 14:46:01 crc kubenswrapper[4718]: I1123 14:46:01.975231 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 14:46:02 crc kubenswrapper[4718]: I1123 14:46:02.586190 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:46:02 crc kubenswrapper[4718]: I1123 14:46:02.587979 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:02 crc kubenswrapper[4718]: I1123 14:46:02.588023 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:02 crc kubenswrapper[4718]: I1123 14:46:02.588045 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:03 crc kubenswrapper[4718]: I1123 14:46:03.459881 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:46:03 crc kubenswrapper[4718]: I1123 14:46:03.461783 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:03 crc kubenswrapper[4718]: I1123 14:46:03.461839 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:03 crc kubenswrapper[4718]: I1123 14:46:03.461857 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:03 crc kubenswrapper[4718]: I1123 14:46:03.461890 4718 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 14:46:03 crc kubenswrapper[4718]: I1123 14:46:03.588311 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:46:03 crc kubenswrapper[4718]: I1123 14:46:03.589376 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:03 crc kubenswrapper[4718]: I1123 14:46:03.589417 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:03 crc kubenswrapper[4718]: I1123 14:46:03.589470 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:03 crc kubenswrapper[4718]: I1123 14:46:03.728348 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 14:46:03 crc kubenswrapper[4718]: I1123 14:46:03.728654 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:46:03 crc kubenswrapper[4718]: I1123 14:46:03.730081 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:03 crc kubenswrapper[4718]: I1123 14:46:03.730129 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 
23 14:46:03 crc kubenswrapper[4718]: I1123 14:46:03.730146 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:04 crc kubenswrapper[4718]: I1123 14:46:04.326545 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 14:46:04 crc kubenswrapper[4718]: I1123 14:46:04.336655 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 14:46:04 crc kubenswrapper[4718]: I1123 14:46:04.591796 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:46:04 crc kubenswrapper[4718]: I1123 14:46:04.592912 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:04 crc kubenswrapper[4718]: I1123 14:46:04.592966 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:04 crc kubenswrapper[4718]: I1123 14:46:04.592980 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:04 crc kubenswrapper[4718]: I1123 14:46:04.598325 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 14:46:04 crc kubenswrapper[4718]: I1123 14:46:04.750849 4718 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 23 14:46:04 crc kubenswrapper[4718]: I1123 14:46:04.825855 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 23 14:46:04 crc kubenswrapper[4718]: I1123 14:46:04.826133 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:46:04 crc kubenswrapper[4718]: I1123 14:46:04.827849 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:04 crc kubenswrapper[4718]: I1123 14:46:04.827909 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:04 crc kubenswrapper[4718]: I1123 14:46:04.827927 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:04 crc kubenswrapper[4718]: I1123 14:46:04.882848 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 14:46:05 crc kubenswrapper[4718]: I1123 14:46:05.594806 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:46:05 crc kubenswrapper[4718]: I1123 14:46:05.596075 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:05 crc kubenswrapper[4718]: I1123 14:46:05.596146 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:05 crc kubenswrapper[4718]: I1123 14:46:05.596170 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:06 crc kubenswrapper[4718]: I1123 14:46:06.597142 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:46:06 crc 
kubenswrapper[4718]: I1123 14:46:06.599231 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:06 crc kubenswrapper[4718]: I1123 14:46:06.599290 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:06 crc kubenswrapper[4718]: I1123 14:46:06.599308 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:08 crc kubenswrapper[4718]: I1123 14:46:08.634479 4718 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 23 14:46:08 crc kubenswrapper[4718]: I1123 14:46:08.634577 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 23 14:46:10 crc kubenswrapper[4718]: I1123 14:46:10.135320 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 14:46:10 crc kubenswrapper[4718]: I1123 14:46:10.135574 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:46:10 crc kubenswrapper[4718]: I1123 14:46:10.137005 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:10 crc kubenswrapper[4718]: I1123 14:46:10.137057 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:10 crc kubenswrapper[4718]: I1123 14:46:10.137075 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:10 crc kubenswrapper[4718]: I1123 14:46:10.143407 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 14:46:10 crc kubenswrapper[4718]: I1123 14:46:10.195720 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 23 14:46:10 crc kubenswrapper[4718]: I1123 14:46:10.195954 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:46:10 crc kubenswrapper[4718]: I1123 14:46:10.197422 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:10 crc kubenswrapper[4718]: I1123 14:46:10.197530 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:10 crc kubenswrapper[4718]: I1123 14:46:10.197548 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:10 crc kubenswrapper[4718]: I1123 14:46:10.319337 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 23 14:46:10 crc kubenswrapper[4718]: I1123 14:46:10.606625 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
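
Annotation: the 403 above is the kube-apiserver refusing the kubelet's startup probe while its own authorization stack is still coming up. The probe hits /livez unauthenticated, the apiserver sees system:anonymous and returns Forbidden, and the kubelet counts any status outside 200-399 as a failure, so the probe stays unhealthy until anonymous health checks are authorized (it flips to status="started" at 14:46:10). A minimal sketch of that success rule, standard-library Go only; probeOnce and the hard-coded URL are illustrative, not kubelet code:

    // probe_sketch.go: a minimal stand-in for the kubelet's HTTP probe rule.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probeOnce treats any response code in [200, 400) as success,
    // the same criterion the kubelet applies to HTTP probes.
    func probeOnce(url string) error {
        client := &http.Client{
            Timeout: 1 * time.Second, // kubelet's default timeoutSeconds
            Transport: &http.Transport{
                // a bootstrapping control plane serves a cert we cannot verify yet
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // dial errors and timeouts count as failures too
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 200 && resp.StatusCode < 400 {
            return nil
        }
        return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
    }

    func main() {
        // Illustrative target; the failing probe above was the apiserver's /livez.
        if err := probeOnce("https://192.168.126.11:6443/livez"); err != nil {
            fmt.Println("probe failure:", err)
        }
    }

The same rule is why a 403 over a perfectly healthy TCP path still shows up as probeResult="failure" in the prober.go entry above.
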
Nov 23 14:46:10 crc kubenswrapper[4718]: I1123 14:46:10.606634 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:46:10 crc kubenswrapper[4718]: I1123 14:46:10.608046 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:10 crc kubenswrapper[4718]: I1123 14:46:10.608088 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:10 crc kubenswrapper[4718]: I1123 14:46:10.608099 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:10 crc kubenswrapper[4718]: I1123 14:46:10.608086 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:10 crc kubenswrapper[4718]: I1123 14:46:10.608281 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:10 crc kubenswrapper[4718]: I1123 14:46:10.608308 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:10 crc kubenswrapper[4718]: I1123 14:46:10.629379 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 23 14:46:10 crc kubenswrapper[4718]: E1123 14:46:10.747431 4718 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 23 14:46:11 crc kubenswrapper[4718]: I1123 14:46:11.134269 4718 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 23 14:46:11 crc kubenswrapper[4718]: I1123 14:46:11.134370 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 23 14:46:11 crc kubenswrapper[4718]: I1123 14:46:11.609120 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:46:11 crc kubenswrapper[4718]: I1123 14:46:11.610383 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:11 crc kubenswrapper[4718]: I1123 14:46:11.610472 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:11 crc kubenswrapper[4718]: I1123 14:46:11.610492 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:11 crc kubenswrapper[4718]: I1123 14:46:11.949504 4718 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
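
Annotation: two distinct failure shapes appear in the probes above. cluster-policy-controller times out (Client.Timeout exceeded: the process accepted work but is not answering /healthz yet), while kube-apiserver-check-endpoints gets connection refused on 17697 (nothing is listening yet). Both land in the log as probeResult="failure", but they point at different stages of startup. A small Go sketch of telling the two cases apart; classify and the target URL are illustrative, not kubelet code:

    package main

    import (
        "errors"
        "fmt"
        "net"
        "net/http"
        "syscall"
        "time"
    )

    // classify maps a probe error onto the two failure modes seen above:
    // a listener that is not up yet (connection refused) versus a process
    // that is up but too early in startup to answer (timeout).
    func classify(err error) string {
        var netErr net.Error
        if errors.As(err, &netErr) && netErr.Timeout() {
            return "timeout: process up but not answering yet"
        }
        if errors.Is(err, syscall.ECONNREFUSED) {
            return "connection refused: nothing listening yet"
        }
        return "other: " + err.Error()
    }

    func main() {
        client := &http.Client{Timeout: time.Second}
        // Illustrative target; 10357 is cluster-policy-controller's healthz port.
        _, err := client.Get("https://192.168.126.11:10357/healthz")
        if err != nil {
            fmt.Println(classify(err))
        }
    }
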
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 23 14:46:13 crc kubenswrapper[4718]: E1123 14:46:13.600689 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="7s" Nov 23 14:46:13 crc kubenswrapper[4718]: I1123 14:46:13.602769 4718 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 23 14:46:13 crc kubenswrapper[4718]: E1123 14:46:13.617327 4718 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 23 14:46:13 crc kubenswrapper[4718]: I1123 14:46:13.617410 4718 trace.go:236] Trace[1101231458]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Nov-2025 14:46:00.939) (total time: 12677ms): Nov 23 14:46:13 crc kubenswrapper[4718]: Trace[1101231458]: ---"Objects listed" error: 12677ms (14:46:13.617) Nov 23 14:46:13 crc kubenswrapper[4718]: Trace[1101231458]: [12.677481518s] [12.677481518s] END Nov 23 14:46:13 crc kubenswrapper[4718]: I1123 14:46:13.618282 4718 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 23 14:46:13 crc kubenswrapper[4718]: I1123 14:46:13.630345 4718 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 23 14:46:13 crc kubenswrapper[4718]: I1123 14:46:13.630713 4718 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 23 14:46:13 crc kubenswrapper[4718]: I1123 14:46:13.636314 4718 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 23 14:46:13 crc kubenswrapper[4718]: I1123 14:46:13.651129 4718 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.334032 4718 apiserver.go:52] "Watching apiserver" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.339119 4718 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.339665 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.340205 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.340326 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.340768 4718 util.go:30] "No sandbox for pod can be found. 
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.340768 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.340529 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 14:46:14 crc kubenswrapper[4718]: E1123 14:46:14.340764 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.340496 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.340871 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 14:46:14 crc kubenswrapper[4718]: E1123 14:46:14.340976 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
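
Annotation: every "Error syncing pod, skipping" entry in this stretch shares one root cause: the container runtime reports NetworkReady=false because no CNI config has been written yet, so any pod that needs a pod-network sandbox is parked, while host-network pods (including the network operator that will eventually write that config) proceed. A minimal check in the same spirit; cniConfigPresent and its extension list are illustrative, since the real gate is enforced by the container runtime rather than the kubelet:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cniConfigPresent reports whether any CNI network config exists in dir,
    // mirroring the "no CNI configuration file" condition in the log.
    func cniConfigPresent(dir string) (bool, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
        if err != nil || !ok {
            fmt.Println("network is not ready: no CNI configuration file; pod sandboxes will be skipped")
            return
        }
        fmt.Println("NetworkReady=true")
    }
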
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.347504 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.347560 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.347961 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.348495 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.348942 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.349218 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.349623 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.350074 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.350678 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.368678 4718 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.433642 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.434531 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.434588 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.434624 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.434657 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.434723 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.434754 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.434787 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 23 14:46:14 crc 
kubenswrapper[4718]: I1123 14:46:14.434820 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.434853 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.434886 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.434922 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.434954 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.434990 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435013 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435021 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435088 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435133 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435161 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435190 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435217 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435246 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435285 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435310 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435336 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435363 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435427 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435484 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435588 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435622 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435652 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435685 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435755 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435790 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435823 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435849 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435875 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435898 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435920 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435943 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435963 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.435986 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.436008 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.436027 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.436048 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.436049 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.436067 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.436169 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.436225 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.436269 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.436278 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.436330 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.436400 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.436490 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.436546 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.436597 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.436646 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.436697 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.436746 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.436800 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.436853 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.436979 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.437029 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.444031 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.444141 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.444200 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.444262 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.444325 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.444385 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.444474 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") 
pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.444544 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.444606 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.444665 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.444728 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.444790 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.444869 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.444938 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.445001 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.445056 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.445121 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: 
\"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.445186 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.445247 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.445302 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.445369 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.445434 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.445599 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.445695 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.445756 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.445808 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.445867 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.445930 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.445992 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.446046 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.446111 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.446177 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.446252 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.447843 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.447925 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.448001 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 23 14:46:14 crc kubenswrapper[4718]: 
I1123 14:46:14.448063 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.448129 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.448192 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.448254 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.448319 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.448387 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.448482 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.448550 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.448617 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.448861 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 14:46:14 crc 
kubenswrapper[4718]: I1123 14:46:14.448917 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.448983 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.449049 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.449108 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.449172 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.449239 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.449293 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.449367 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.449475 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.449549 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 23 14:46:14 crc 
kubenswrapper[4718]: I1123 14:46:14.449638 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.449714 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.449781 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.449835 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.449903 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.449968 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.450039 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.450098 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.450167 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.450234 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.450290 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.450356 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.450424 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.450547 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.450607 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.450671 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.450733 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.450792 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.450858 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.450933 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: 
\"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.450987 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.451052 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.451113 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.451174 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.451260 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.451321 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.451376 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.451471 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.451541 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.451597 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.451660 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.451731 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.451788 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.451860 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.451919 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.451965 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.452004 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.452050 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.452095 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.452133 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.452176 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.452220 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.452262 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.452301 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.452346 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.452392 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.452430 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.436327 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.436751 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.453762 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.454235 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.454337 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.455171 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.459921 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.460167 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.460673 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.460670 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.461150 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.462007 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.462047 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.462620 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.463078 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.463552 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.464321 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.470713 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.436811 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.437020 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.443919 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.449258 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.449310 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.449977 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.450371 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.450421 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.451773 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.451864 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.452235 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.453244 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.472879 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.473254 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.474123 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.474642 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.475507 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.475611 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.479740 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.479770 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.479884 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.479927 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.479964 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.479999 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.480024 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.480051 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.480173 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.481771 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.481814 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.481839 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.481885 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.481873 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.481913 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.481937 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.481966 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.481995 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.482018 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.482044 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.482072 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.482101 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.482128 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.482154 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: 
\"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.482567 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.482607 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.482680 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.483233 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.483278 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.483596 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.483631 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.483663 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.483693 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.483720 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.483745 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.483773 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.483800 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.483823 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.483853 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.483881 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.483906 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.483932 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.483998 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.489259 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.490467 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.490548 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.490591 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.490558 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.490625 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.490655 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.490686 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.490717 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.490750 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.490776 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.490804 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.490837 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.490877 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.490915 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.490930 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491033 4718 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491051 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491068 4718 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491091 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491109 4718 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491124 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491139 4718 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491156 4718 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491170 4718 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491162 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491188 4718 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491357 4718 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491373 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491388 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491403 4718 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491420 4718 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491503 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491514 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491526 4718 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491547 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491559 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491572 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491522 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491533 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491545 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491834 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491828 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491869 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491893 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.491916 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.492228 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.492566 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.492741 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.492928 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.492999 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.493202 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.493230 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.493260 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.493518 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.493612 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.493921 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.493981 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.494158 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.494216 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.494507 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.494554 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.494775 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.494990 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: E1123 14:46:14.494698 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:46:14.994678839 +0000 UTC m=+26.234298683 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.495017 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.454976 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.495219 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.495268 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.495399 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.495687 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.495724 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.495789 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.496019 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.496241 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.496281 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.496665 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.496873 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.497375 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.497412 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.497836 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.497924 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.497043 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.498820 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.499318 4718 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 23 14:46:14 crc kubenswrapper[4718]: E1123 14:46:14.501326 4718 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 14:46:14 crc kubenswrapper[4718]: E1123 14:46:14.502681 4718 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.502938 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.502970 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.503144 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: E1123 14:46:14.503849 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 14:46:15.003782379 +0000 UTC m=+26.243402263 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.504524 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.504923 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.504928 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.505368 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.505804 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.506046 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.506372 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.506988 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.507208 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.507281 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.507731 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.509354 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.505107 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: E1123 14:46:14.521267 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 14:46:15.02122992 +0000 UTC m=+26.260849774 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.521581 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.521791 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: E1123 14:46:14.521797 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.521831 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: E1123 14:46:14.521852 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 14:46:14 crc kubenswrapper[4718]: E1123 14:46:14.521870 4718 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.522065 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.522329 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.522572 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.522680 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.522864 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.523066 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.523410 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.523640 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.523854 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.523870 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.523952 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.524095 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.524235 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.526033 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: E1123 14:46:14.528740 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 14:46:14 crc kubenswrapper[4718]: E1123 14:46:14.528779 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 14:46:14 crc kubenswrapper[4718]: E1123 14:46:14.528799 4718 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:14 crc kubenswrapper[4718]: E1123 14:46:14.535318 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 14:46:15.035283104 +0000 UTC m=+26.274902948 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:14 crc kubenswrapper[4718]: E1123 14:46:14.535368 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 14:46:15.035357426 +0000 UTC m=+26.274977380 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.535941 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.535976 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.536198 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.536283 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.536379 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.536493 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.536546 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.536690 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.537164 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.537212 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.537525 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.538684 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.538726 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.538732 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.539022 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.544427 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.544567 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.545150 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.545428 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.545460 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.545872 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.547024 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.547457 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.548257 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.548289 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.548407 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.548784 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.548779 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.548919 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.549345 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.549361 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.549554 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.549609 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.549718 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.549819 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.549835 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.549290 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.550005 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.550192 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.550215 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.550394 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.550498 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.550516 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.550608 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.550718 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.551061 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.551346 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.551453 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.551607 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.551878 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.551930 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.551914 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.551887 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.551870 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.551983 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.551991 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.552117 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.552202 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.552372 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.552489 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.552707 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.552923 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.553385 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.553664 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.554314 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.554761 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.555567 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.555980 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.556557 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.557737 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.557799 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.558728 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.558741 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.559062 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.559149 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.559201 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.559348 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.559516 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.559584 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.559717 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.560153 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.561787 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.562196 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.562956 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.564303 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.565195 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.566075 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.567507 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.568289 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.568990 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.570370 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.571531 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.572314 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.580918 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.581608 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.581808 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.583004 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.584029 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.584580 4718 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.584705 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.585098 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.586891 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.587216 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.587374 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.588178 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.589730 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.590339 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.591227 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.591891 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592067 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592131 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592324 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592344 4718 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592359 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592369 4718 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592380 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592372 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592388 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592468 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592604 4718 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592624 4718 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592639 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592658 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592673 4718 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592684 4718 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592696 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592708 4718 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592720 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592732 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592743 4718 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592772 4718 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592788 4718 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592789 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592799 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592812 4718 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592823 4718 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592837 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592849 4718 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592861 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592873 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592884 4718 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592895 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592909 4718 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592921 4718 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592932 4718 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592944 4718 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592957 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592968 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592980 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.592991 4718 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593002 4718 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593013 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593025 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593036 4718 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593049 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593060 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593072 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593083 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593129 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593141 4718 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593155 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593168 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593179 4718 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593189 4718 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593200 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593212 4718 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593223 4718 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593233 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593244 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593255 4718 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593266 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593277 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593288 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593293 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593299 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593311 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593322 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593333 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593347 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593358 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593368 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593380 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593391 4718 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593403 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593414 4718 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593427 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593455 4718 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593467 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593479 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593494 4718 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593504 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593515 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593527 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593539 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593549 4718 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593560 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593571 4718 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593582 4718 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593592 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593603 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593615 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593626 4718 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593638 4718 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593651 4718 reconciler_common.go:293]
"Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593662 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593675 4718 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593685 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593696 4718 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593706 4718 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593717 4718 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593727 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593737 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593749 4718 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593760 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593770 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593782 4718 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593793 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593836 4718 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593850 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593863 4718 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593874 4718 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593884 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593895 4718 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593906 4718 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593916 4718 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593927 4718 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593938 4718 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593948 4718 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593960 4718 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593970 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593981 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.593991 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594002 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594012 4718 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594022 4718 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594033 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594044 4718 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594055 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594065 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594075 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594085 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594096 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594107 4718 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594118 4718 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594129 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594139 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594150 4718 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594160 4718 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594170 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594180 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594192 4718 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594202 4718 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594213 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594225 4718 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594237 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594248 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594259 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594272 4718 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594283 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594294 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594305 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594316 4718 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594326 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594339 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594349 4718 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594360 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594372 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594382 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594393 4718 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594404 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594415 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594426 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594431 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594452 4718 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594490 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594525 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594537 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594577 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594590 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594601 4718 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594613 4718 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594624 4718 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594636 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594649 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594660 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594671 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594682 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594694 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.594705 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.595074 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.596055 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.596614 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.597729 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.598271 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.599482 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" 
path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.599943 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.600780 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.601238 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.601778 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.602730 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.603188 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.626799 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.627288 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.629526 4718 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21" exitCode=255 Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.629563 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21"} Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.629627 4718 scope.go:117] "RemoveContainer" containerID="7486a4a6af7b8defa077d6084d60dcb97b57489098baee4fabee8b20c2728661" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.641863 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.649013 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.649595 4718 scope.go:117] "RemoveContainer" containerID="d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21" Nov 23 14:46:14 crc kubenswrapper[4718]: E1123 14:46:14.649854 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.652779 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.662014 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.662312 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.672339 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.675010 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.682136 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.685050 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 14:46:14 crc kubenswrapper[4718]: W1123 14:46:14.688054 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-b5c427e8cb15a7183faa8ff4233d2cfc00c3cc85b2b38a5b2d67bdb1af00026f WatchSource:0}: Error finding container b5c427e8cb15a7183faa8ff4233d2cfc00c3cc85b2b38a5b2d67bdb1af00026f: Status 404 returned error can't find the container with id b5c427e8cb15a7183faa8ff4233d2cfc00c3cc85b2b38a5b2d67bdb1af00026f Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.692492 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:14 crc kubenswrapper[4718]: I1123 14:46:14.999155 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:46:14 crc kubenswrapper[4718]: E1123 14:46:14.999322 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:46:15.999306946 +0000 UTC m=+27.238926790 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.099516 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.099554 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.099572 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.099590 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:15 crc kubenswrapper[4718]: E1123 14:46:15.099679 4718 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 14:46:15 crc kubenswrapper[4718]: E1123 14:46:15.099724 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 14:46:16.099712001 +0000 UTC m=+27.339331845 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 14:46:15 crc kubenswrapper[4718]: E1123 14:46:15.099925 4718 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 14:46:15 crc kubenswrapper[4718]: E1123 14:46:15.099978 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 14:46:16.099963668 +0000 UTC m=+27.339583512 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 14:46:15 crc kubenswrapper[4718]: E1123 14:46:15.100062 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 14:46:15 crc kubenswrapper[4718]: E1123 14:46:15.100082 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 14:46:15 crc kubenswrapper[4718]: E1123 14:46:15.100095 4718 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:15 crc kubenswrapper[4718]: E1123 14:46:15.100161 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 14:46:16.100125522 +0000 UTC m=+27.339745366 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:15 crc kubenswrapper[4718]: E1123 14:46:15.100318 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 14:46:15 crc kubenswrapper[4718]: E1123 14:46:15.100380 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 14:46:15 crc kubenswrapper[4718]: E1123 14:46:15.100406 4718 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:15 crc kubenswrapper[4718]: E1123 14:46:15.100557 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 14:46:16.100519832 +0000 UTC m=+27.340139736 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.440639 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:15 crc kubenswrapper[4718]: E1123 14:46:15.440837 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.634961 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a"} Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.635021 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0bbcaef2216d4e71ef9af2f0bab98696f0baeea14caab5797f0c406988b88ab9"} Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.637288 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.641037 4718 scope.go:117] "RemoveContainer" containerID="d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21" Nov 23 14:46:15 crc kubenswrapper[4718]: E1123 14:46:15.641294 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.645047 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38"} Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.645101 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"224c3ef772035924820a302993c4239a36479da8c7c7551a2b36b27065740043"} Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.646569 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b5c427e8cb15a7183faa8ff4233d2cfc00c3cc85b2b38a5b2d67bdb1af00026f"} Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.654577 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.672907 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.689074 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.705936 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.720240 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.740169 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.766791 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.811936 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.826659 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.838872 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.850425 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.869235 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.880250 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:15 crc kubenswrapper[4718]: I1123 14:46:15.889486 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:16 crc kubenswrapper[4718]: I1123 14:46:16.009738 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:46:16 crc kubenswrapper[4718]: E1123 14:46:16.010019 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:46:18.009984385 +0000 UTC m=+29.249604259 (durationBeforeRetry 2s). 
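Every status_manager failure in the long run of records above bottoms out in the same dial error: Post https://127.0.0.1:9743/pod is refused because the network-node-identity webhook (itself still ContainerCreating, per the very status patches it is blocking) is not listening yet. A stdlib Go probe that reproduces the check from the node; the address is taken verbatim from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The patch failures above share one root cause: nothing is
	// accepting connections on the webhook endpoint yet. A plain TCP
	// probe, run on the node, makes that visible without any TLS or
	// admission-review plumbing.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
	if err != nil {
		fmt.Println("webhook endpoint unreachable:", err) // e.g. connection refused
		return
	}
	conn.Close()
	fmt.Println("webhook endpoint is accepting connections")
}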
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:46:16 crc kubenswrapper[4718]: I1123 14:46:16.112062 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:16 crc kubenswrapper[4718]: I1123 14:46:16.112139 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:16 crc kubenswrapper[4718]: I1123 14:46:16.112178 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:16 crc kubenswrapper[4718]: I1123 14:46:16.112217 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:16 crc kubenswrapper[4718]: E1123 14:46:16.112301 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 14:46:16 crc kubenswrapper[4718]: E1123 14:46:16.112348 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 14:46:16 crc kubenswrapper[4718]: E1123 14:46:16.112374 4718 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:16 crc kubenswrapper[4718]: E1123 14:46:16.112399 4718 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 14:46:16 crc kubenswrapper[4718]: E1123 14:46:16.112458 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 14:46:16 crc 
kubenswrapper[4718]: E1123 14:46:16.112495 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 14:46:16 crc kubenswrapper[4718]: E1123 14:46:16.112510 4718 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:16 crc kubenswrapper[4718]: E1123 14:46:16.112516 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 14:46:18.112485644 +0000 UTC m=+29.352105528 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:16 crc kubenswrapper[4718]: E1123 14:46:16.112652 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 14:46:18.112548476 +0000 UTC m=+29.352168330 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 14:46:16 crc kubenswrapper[4718]: E1123 14:46:16.112825 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 14:46:18.112774222 +0000 UTC m=+29.352394116 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:16 crc kubenswrapper[4718]: E1123 14:46:16.113331 4718 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 14:46:16 crc kubenswrapper[4718]: E1123 14:46:16.114425 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 14:46:18.114400092 +0000 UTC m=+29.354019946 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 14:46:16 crc kubenswrapper[4718]: I1123 14:46:16.441028 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:16 crc kubenswrapper[4718]: I1123 14:46:16.441081 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:16 crc kubenswrapper[4718]: E1123 14:46:16.441253 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:46:16 crc kubenswrapper[4718]: E1123 14:46:16.441402 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:46:16 crc kubenswrapper[4718]: I1123 14:46:16.448098 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 23 14:46:16 crc kubenswrapper[4718]: I1123 14:46:16.448892 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 23 14:46:16 crc kubenswrapper[4718]: I1123 14:46:16.451114 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 23 14:46:16 crc kubenswrapper[4718]: I1123 14:46:16.452480 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 23 14:46:16 crc kubenswrapper[4718]: I1123 14:46:16.476000 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 23 14:46:16 crc kubenswrapper[4718]: I1123 14:46:16.477149 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 23 14:46:16 crc kubenswrapper[4718]: I1123 14:46:16.478830 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 23 14:46:16 crc kubenswrapper[4718]: I1123 14:46:16.480167 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 23 14:46:16 crc kubenswrapper[4718]: I1123 14:46:16.652245 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39"} Nov 23 14:46:16 crc kubenswrapper[4718]: I1123 14:46:16.674297 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:16 crc kubenswrapper[4718]: I1123 14:46:16.694274 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:16 crc kubenswrapper[4718]: I1123 14:46:16.710147 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:16 crc kubenswrapper[4718]: I1123 14:46:16.725124 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:16 crc kubenswrapper[4718]: I1123 14:46:16.744828 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:16 crc kubenswrapper[4718]: I1123 14:46:16.756638 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154
edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:16 crc kubenswrapper[4718]: I1123 14:46:16.767519 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 14:46:17 crc kubenswrapper[4718]: I1123 14:46:17.440725 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:17 crc kubenswrapper[4718]: E1123 14:46:17.440948 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.031855 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:46:18 crc kubenswrapper[4718]: E1123 14:46:18.032065 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:46:22.032024739 +0000 UTC m=+33.271644723 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.132658 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.132748 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.132787 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.132823 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:18 crc kubenswrapper[4718]: E1123 14:46:18.132880 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 14:46:18 crc kubenswrapper[4718]: E1123 14:46:18.132920 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 14:46:18 crc kubenswrapper[4718]: E1123 14:46:18.132941 4718 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:18 crc kubenswrapper[4718]: E1123 14:46:18.132948 4718 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 14:46:18 crc kubenswrapper[4718]: E1123 14:46:18.133018 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 14:46:22.13299651 +0000 UTC m=+33.372616394 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 14:46:18 crc kubenswrapper[4718]: E1123 14:46:18.133051 4718 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 14:46:18 crc kubenswrapper[4718]: E1123 14:46:18.133100 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 14:46:18 crc kubenswrapper[4718]: E1123 14:46:18.133171 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 14:46:18 crc kubenswrapper[4718]: E1123 14:46:18.133198 4718 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:18 crc kubenswrapper[4718]: E1123 14:46:18.133137 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 14:46:22.133044031 +0000 UTC m=+33.372663925 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:18 crc kubenswrapper[4718]: E1123 14:46:18.133264 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 14:46:22.133233095 +0000 UTC m=+33.372852979 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 14:46:18 crc kubenswrapper[4718]: E1123 14:46:18.133301 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 14:46:22.133282377 +0000 UTC m=+33.372902271 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.141356 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.150021 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.161747 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.169981 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.191629 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.211565 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.237768 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.259659 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.281974 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.300860 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.320370 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.340768 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.365857 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.386129 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.413754 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.436939 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.440571 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:18 crc kubenswrapper[4718]: E1123 14:46:18.440776 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.441424 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:18 crc kubenswrapper[4718]: E1123 14:46:18.441570 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.461883 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:18 crc kubenswrapper[4718]: I1123 14:46:18.477252 4718 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:19 crc kubenswrapper[4718]: I1123 14:46:19.440593 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:19 crc kubenswrapper[4718]: E1123 14:46:19.440753 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.440724 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:20 crc kubenswrapper[4718]: E1123 14:46:20.440862 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.440928 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:20 crc kubenswrapper[4718]: E1123 14:46:20.441091 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.460993 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.477316 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.494539 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.511337 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.528734 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.566050 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.609241 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.609965 4718 scope.go:117] "RemoveContainer" containerID="d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21" Nov 23 14:46:20 crc kubenswrapper[4718]: E1123 14:46:20.610106 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.613034 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
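An editorial aside on the CrashLoopBackOff record a few entries above ("back-off 10s restarting failed container=kube-apiserver-check-endpoints"): the delay comes from the kubelet's standard crash-loop backoff, which by upstream default starts at 10s and doubles on each restart up to a 5m cap, so later occurrences of this message will show 20s, 40s, and so on. A minimal sketch of that schedule; the constants are the upstream defaults, not values read from this log:

    # Kubelet crash-loop backoff schedule: 10s base, doubling, capped at 5m.
    # Upstream-default constants; illustrative only, not parsed from the journal.
    base, cap = 10, 300  # seconds
    d, delays = base, []
    while d < cap:
        delays.append(d)
        d *= 2
    delays.append(cap)
    print(" -> ".join(f"{s}s" for s in delays))  # 10s -> 20s -> 40s -> 80s -> 160s -> 300s
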
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.618583 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.621116 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.621197 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.621218 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.621321 4718 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.628800 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.630433 4718 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.630839 4718 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.632211 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.632250 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.632259 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.632272 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.632284 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:20Z","lastTransitionTime":"2025-11-23T14:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
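Every "Failed to update status for pod" entry in the stretch above fails for the same reason: the API server cannot call the pod.network-node-identity.openshift.io webhook because its serving certificate expired on 2025-08-24T17:21:41Z. One quick way to confirm that a single root cause covers every affected pod is to tally the failures in a saved copy of this journal. A minimal sketch, assuming a hypothetical capture file kubelet.log (for example from journalctl -u kubelet):

    # Count "Failed to update status for pod" records per pod in a saved journal.
    # "kubelet.log" is a hypothetical capture file, not something referenced above.
    import re
    from collections import Counter

    pattern = re.compile(r'Failed to update status for pod" pod="([^"]+)"')
    counts = Counter()
    with open("kubelet.log", encoding="utf-8") as fh:
        for line in fh:
            counts.update(pattern.findall(line))

    for pod, n in counts.most_common():
        print(f"{n:3d}  {pod}")

On this excerpt it would list networking-console-plugin-85b44fc459-gdk6g, network-node-identity-vrzqb, network-check-target-xd92c, network-check-source-55646444c4-trplf and iptables-alerter-4ln5h, all failing against the same webhook.
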
Has your network provider started?"} Nov 23 14:46:20 crc kubenswrapper[4718]: E1123 14:46:20.649910 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.654833 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.654867 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
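The expiry the kubelet keeps quoting (certificate not valid after 2025-08-24T17:21:41Z, almost three months before the current time 2025-11-23T14:46:20Z) can be read directly off the webhook endpoint named in the errors, https://127.0.0.1:9743. A minimal inspection sketch, assuming the third-party cryptography package (version 42 or newer for the *_utc accessors) and that it runs on the node itself, since the webhook listens on localhost; verification is deliberately disabled so the handshake survives long enough to read the expired certificate:

    # Fetch and decode the serving certificate of the failing webhook endpoint.
    # Assumes the `cryptography` package (>= 42) is installed; run on the node.
    import socket
    import ssl
    from cryptography import x509

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # inspect the expired cert instead of rejecting it

    with socket.create_connection(("127.0.0.1", 9743), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname="127.0.0.1") as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    print("subject:  ", cert.subject.rfc4514_string())
    print("notBefore:", cert.not_valid_before_utc)
    print("notAfter: ", cert.not_valid_after_utc)  # expect 2025-08-24 17:21:41+00:00
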
event="NodeHasNoDiskPressure" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.654884 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.654909 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.654927 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:20Z","lastTransitionTime":"2025-11-23T14:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:20 crc kubenswrapper[4718]: E1123 14:46:20.670972 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.674942 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.674982 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.674994 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.675009 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.675022 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:20Z","lastTransitionTime":"2025-11-23T14:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:20 crc kubenswrapper[4718]: E1123 14:46:20.689749 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.693476 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.693509 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.693519 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.693533 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.693544 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:20Z","lastTransitionTime":"2025-11-23T14:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:20 crc kubenswrapper[4718]: E1123 14:46:20.707793 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.711875 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.711941 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.711958 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.711982 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.711998 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:20Z","lastTransitionTime":"2025-11-23T14:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:20 crc kubenswrapper[4718]: E1123 14:46:20.726898 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:20 crc kubenswrapper[4718]: E1123 14:46:20.727149 4718 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.730858 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.730901 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.730916 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.730940 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.730958 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:20Z","lastTransitionTime":"2025-11-23T14:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.834307 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.834356 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.834369 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.834387 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.834401 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:20Z","lastTransitionTime":"2025-11-23T14:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.937564 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.937618 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.937630 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.937649 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:20 crc kubenswrapper[4718]: I1123 14:46:20.937663 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:20Z","lastTransitionTime":"2025-11-23T14:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.040228 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.040283 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.040305 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.040333 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.040351 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:21Z","lastTransitionTime":"2025-11-23T14:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.062910 4718 csr.go:261] certificate signing request csr-v6m9n is approved, waiting to be issued Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.089850 4718 csr.go:257] certificate signing request csr-v6m9n is issued Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.144005 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.144079 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.144131 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.144162 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.144187 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:21Z","lastTransitionTime":"2025-11-23T14:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.246632 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.246688 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.246701 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.246723 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.246739 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:21Z","lastTransitionTime":"2025-11-23T14:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.349285 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.349336 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.349346 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.349365 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.349377 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:21Z","lastTransitionTime":"2025-11-23T14:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.440371 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:21 crc kubenswrapper[4718]: E1123 14:46:21.440596 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.452234 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.452275 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.452288 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.452316 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.452330 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:21Z","lastTransitionTime":"2025-11-23T14:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.555582 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.555627 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.555644 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.555664 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.555675 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:21Z","lastTransitionTime":"2025-11-23T14:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.659045 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.659103 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.659112 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.659131 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.659158 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:21Z","lastTransitionTime":"2025-11-23T14:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.667136 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3"} Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.681292 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e
8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:21Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.710665 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:21Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.734046 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:21Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.750272 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:21Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.761652 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.761705 4718 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.761722 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.761745 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.761761 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:21Z","lastTransitionTime":"2025-11-23T14:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.764318 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:21Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.781851 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:21Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.799169 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:21Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.815291 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:21Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.866073 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.866137 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.866147 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.866170 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.866183 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:21Z","lastTransitionTime":"2025-11-23T14:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.969099 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.969140 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.969150 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.969168 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:21 crc kubenswrapper[4718]: I1123 14:46:21.969179 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:21Z","lastTransitionTime":"2025-11-23T14:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.073041 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.073161 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.073254 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.073277 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.073289 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:22Z","lastTransitionTime":"2025-11-23T14:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.075787 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:46:22 crc kubenswrapper[4718]: E1123 14:46:22.076042 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:46:30.075974007 +0000 UTC m=+41.315593891 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.091853 4718 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-11-23 14:41:21 +0000 UTC, rotation deadline is 2026-09-04 19:23:32.09941593 +0000 UTC Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.091948 4718 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6844h37m10.0074728s for next certificate rotation Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.168999 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-qb66k"] Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.169380 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.174046 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-557f4"] Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.175217 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-557f4" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.176214 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.176255 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.176299 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.176322 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.176404 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 23 14:46:22 crc 
kubenswrapper[4718]: E1123 14:46:22.176479 4718 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 14:46:22 crc kubenswrapper[4718]: E1123 14:46:22.176540 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 14:46:30.176524347 +0000 UTC m=+41.416144191 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.176626 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 23 14:46:22 crc kubenswrapper[4718]: E1123 14:46:22.177004 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 14:46:22 crc kubenswrapper[4718]: E1123 14:46:22.177034 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.177045 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 23 14:46:22 crc kubenswrapper[4718]: E1123 14:46:22.177115 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 14:46:22 crc kubenswrapper[4718]: E1123 14:46:22.177133 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 14:46:22 crc kubenswrapper[4718]: E1123 14:46:22.177143 4718 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:22 crc kubenswrapper[4718]: E1123 14:46:22.177177 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 14:46:30.177167172 +0000 UTC m=+41.416787016 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.183729 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-kv78j"] Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.184549 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-kv78j" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.184736 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.184954 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.185528 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.185555 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.185566 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.185584 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:22 crc kubenswrapper[4718]: E1123 14:46:22.177050 4718 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.185599 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:22Z","lastTransitionTime":"2025-11-23T14:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:22 crc kubenswrapper[4718]: E1123 14:46:22.185730 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 14:46:30.185709588 +0000 UTC m=+41.425329432 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:22 crc kubenswrapper[4718]: E1123 14:46:22.186416 4718 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 14:46:22 crc kubenswrapper[4718]: E1123 14:46:22.186518 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 14:46:30.186491288 +0000 UTC m=+41.426111132 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.188524 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.191239 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.191321 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.193416 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.197388 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.214731 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.260216 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.276708 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6d7996bb-8907-49ac-afb1-a8de8d2553c6-cni-binary-copy\") pod \"multus-additional-cni-plugins-557f4\" (UID: \"6d7996bb-8907-49ac-afb1-a8de8d2553c6\") " pod="openshift-multus/multus-additional-cni-plugins-557f4" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.276746 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-host-run-netns\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.276765 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-host-run-multus-certs\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.276790 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6d7996bb-8907-49ac-afb1-a8de8d2553c6-os-release\") pod \"multus-additional-cni-plugins-557f4\" (UID: \"6d7996bb-8907-49ac-afb1-a8de8d2553c6\") " pod="openshift-multus/multus-additional-cni-plugins-557f4" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.276805 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-multus-socket-dir-parent\") pod \"multus-qb66k\" (UID: 
\"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.276823 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-host-var-lib-cni-multus\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.276838 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6d7996bb-8907-49ac-afb1-a8de8d2553c6-system-cni-dir\") pod \"multus-additional-cni-plugins-557f4\" (UID: \"6d7996bb-8907-49ac-afb1-a8de8d2553c6\") " pod="openshift-multus/multus-additional-cni-plugins-557f4" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.276859 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7ncn\" (UniqueName: \"kubernetes.io/projected/49e539fc-7a1f-42e0-9a69-230331321d85-kube-api-access-g7ncn\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.277042 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6d7996bb-8907-49ac-afb1-a8de8d2553c6-cnibin\") pod \"multus-additional-cni-plugins-557f4\" (UID: \"6d7996bb-8907-49ac-afb1-a8de8d2553c6\") " pod="openshift-multus/multus-additional-cni-plugins-557f4" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.277137 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/49e539fc-7a1f-42e0-9a69-230331321d85-cni-binary-copy\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.277163 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-host-var-lib-cni-bin\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.277184 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/49e539fc-7a1f-42e0-9a69-230331321d85-multus-daemon-config\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.277354 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-host-run-k8s-cni-cncf-io\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.277523 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx62w\" (UniqueName: 
\"kubernetes.io/projected/6d7996bb-8907-49ac-afb1-a8de8d2553c6-kube-api-access-rx62w\") pod \"multus-additional-cni-plugins-557f4\" (UID: \"6d7996bb-8907-49ac-afb1-a8de8d2553c6\") " pod="openshift-multus/multus-additional-cni-plugins-557f4" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.277611 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-cnibin\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.277645 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-multus-conf-dir\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.277704 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbhwx\" (UniqueName: \"kubernetes.io/projected/6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6-kube-api-access-vbhwx\") pod \"node-resolver-kv78j\" (UID: \"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\") " pod="openshift-dns/node-resolver-kv78j" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.277751 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-os-release\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.277833 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6d7996bb-8907-49ac-afb1-a8de8d2553c6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-557f4\" (UID: \"6d7996bb-8907-49ac-afb1-a8de8d2553c6\") " pod="openshift-multus/multus-additional-cni-plugins-557f4" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.277872 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-host-var-lib-kubelet\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.277895 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-etc-kubernetes\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.277912 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6d7996bb-8907-49ac-afb1-a8de8d2553c6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-557f4\" (UID: \"6d7996bb-8907-49ac-afb1-a8de8d2553c6\") " pod="openshift-multus/multus-additional-cni-plugins-557f4" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.277931 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6-hosts-file\") pod \"node-resolver-kv78j\" (UID: \"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\") " pod="openshift-dns/node-resolver-kv78j" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.277950 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-multus-cni-dir\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.277972 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-system-cni-dir\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.277990 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-hostroot\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.281143 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.288262 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.288292 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.288301 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.288315 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.288324 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:22Z","lastTransitionTime":"2025-11-23T14:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.297578 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.315674 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.328668 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.342849 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.357311 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.372666 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.378662 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/49e539fc-7a1f-42e0-9a69-230331321d85-cni-binary-copy\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.378733 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-host-var-lib-cni-bin\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.378756 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/49e539fc-7a1f-42e0-9a69-230331321d85-multus-daemon-config\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.378805 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-host-run-k8s-cni-cncf-io\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " 
pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.378826 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx62w\" (UniqueName: \"kubernetes.io/projected/6d7996bb-8907-49ac-afb1-a8de8d2553c6-kube-api-access-rx62w\") pod \"multus-additional-cni-plugins-557f4\" (UID: \"6d7996bb-8907-49ac-afb1-a8de8d2553c6\") " pod="openshift-multus/multus-additional-cni-plugins-557f4" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.378851 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-cnibin\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.379256 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-multus-conf-dir\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.379403 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbhwx\" (UniqueName: \"kubernetes.io/projected/6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6-kube-api-access-vbhwx\") pod \"node-resolver-kv78j\" (UID: \"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\") " pod="openshift-dns/node-resolver-kv78j" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.379003 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-host-run-k8s-cni-cncf-io\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.378879 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-host-var-lib-cni-bin\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.379334 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-multus-conf-dir\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.379542 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-os-release\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.379128 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-cnibin\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.379596 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/6d7996bb-8907-49ac-afb1-a8de8d2553c6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-557f4\" (UID: \"6d7996bb-8907-49ac-afb1-a8de8d2553c6\") " pod="openshift-multus/multus-additional-cni-plugins-557f4" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.379712 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-os-release\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.379720 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/49e539fc-7a1f-42e0-9a69-230331321d85-cni-binary-copy\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.379758 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-host-var-lib-kubelet\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.379816 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-etc-kubernetes\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.379813 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-host-var-lib-kubelet\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.379848 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6d7996bb-8907-49ac-afb1-a8de8d2553c6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-557f4\" (UID: \"6d7996bb-8907-49ac-afb1-a8de8d2553c6\") " pod="openshift-multus/multus-additional-cni-plugins-557f4" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.379912 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-etc-kubernetes\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.379952 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6-hosts-file\") pod \"node-resolver-kv78j\" (UID: \"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\") " pod="openshift-dns/node-resolver-kv78j" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.379978 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-multus-cni-dir\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " 
pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380017 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-system-cni-dir\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380021 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/49e539fc-7a1f-42e0-9a69-230331321d85-multus-daemon-config\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380059 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-hostroot\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380098 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-system-cni-dir\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380037 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-hostroot\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380159 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6-hosts-file\") pod \"node-resolver-kv78j\" (UID: \"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\") " pod="openshift-dns/node-resolver-kv78j" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380210 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6d7996bb-8907-49ac-afb1-a8de8d2553c6-cni-binary-copy\") pod \"multus-additional-cni-plugins-557f4\" (UID: \"6d7996bb-8907-49ac-afb1-a8de8d2553c6\") " pod="openshift-multus/multus-additional-cni-plugins-557f4" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380282 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-host-run-netns\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380318 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-host-run-multus-certs\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380369 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/6d7996bb-8907-49ac-afb1-a8de8d2553c6-os-release\") pod \"multus-additional-cni-plugins-557f4\" (UID: \"6d7996bb-8907-49ac-afb1-a8de8d2553c6\") " pod="openshift-multus/multus-additional-cni-plugins-557f4" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380458 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6d7996bb-8907-49ac-afb1-a8de8d2553c6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-557f4\" (UID: \"6d7996bb-8907-49ac-afb1-a8de8d2553c6\") " pod="openshift-multus/multus-additional-cni-plugins-557f4" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380430 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-multus-socket-dir-parent\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380512 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-host-var-lib-cni-multus\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380528 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-multus-socket-dir-parent\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380542 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6d7996bb-8907-49ac-afb1-a8de8d2553c6-system-cni-dir\") pod \"multus-additional-cni-plugins-557f4\" (UID: \"6d7996bb-8907-49ac-afb1-a8de8d2553c6\") " pod="openshift-multus/multus-additional-cni-plugins-557f4" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380217 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-multus-cni-dir\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380567 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7ncn\" (UniqueName: \"kubernetes.io/projected/49e539fc-7a1f-42e0-9a69-230331321d85-kube-api-access-g7ncn\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380603 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-host-run-netns\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380616 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6d7996bb-8907-49ac-afb1-a8de8d2553c6-cnibin\") pod 
\"multus-additional-cni-plugins-557f4\" (UID: \"6d7996bb-8907-49ac-afb1-a8de8d2553c6\") " pod="openshift-multus/multus-additional-cni-plugins-557f4" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380645 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6d7996bb-8907-49ac-afb1-a8de8d2553c6-cnibin\") pod \"multus-additional-cni-plugins-557f4\" (UID: \"6d7996bb-8907-49ac-afb1-a8de8d2553c6\") " pod="openshift-multus/multus-additional-cni-plugins-557f4" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380700 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-host-run-multus-certs\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380721 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/49e539fc-7a1f-42e0-9a69-230331321d85-host-var-lib-cni-multus\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380770 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6d7996bb-8907-49ac-afb1-a8de8d2553c6-os-release\") pod \"multus-additional-cni-plugins-557f4\" (UID: \"6d7996bb-8907-49ac-afb1-a8de8d2553c6\") " pod="openshift-multus/multus-additional-cni-plugins-557f4" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380797 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6d7996bb-8907-49ac-afb1-a8de8d2553c6-system-cni-dir\") pod \"multus-additional-cni-plugins-557f4\" (UID: \"6d7996bb-8907-49ac-afb1-a8de8d2553c6\") " pod="openshift-multus/multus-additional-cni-plugins-557f4" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.380914 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6d7996bb-8907-49ac-afb1-a8de8d2553c6-cni-binary-copy\") pod \"multus-additional-cni-plugins-557f4\" (UID: \"6d7996bb-8907-49ac-afb1-a8de8d2553c6\") " pod="openshift-multus/multus-additional-cni-plugins-557f4" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.385958 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.391357 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.391402 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.391413 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.391450 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.391464 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:22Z","lastTransitionTime":"2025-11-23T14:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.399824 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7ncn\" (UniqueName: \"kubernetes.io/projected/49e539fc-7a1f-42e0-9a69-230331321d85-kube-api-access-g7ncn\") pod \"multus-qb66k\" (UID: \"49e539fc-7a1f-42e0-9a69-230331321d85\") " pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.403480 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx62w\" (UniqueName: \"kubernetes.io/projected/6d7996bb-8907-49ac-afb1-a8de8d2553c6-kube-api-access-rx62w\") pod \"multus-additional-cni-plugins-557f4\" (UID: \"6d7996bb-8907-49ac-afb1-a8de8d2553c6\") " pod="openshift-multus/multus-additional-cni-plugins-557f4" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.405102 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z"
Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.407491 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbhwx\" (UniqueName: \"kubernetes.io/projected/6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6-kube-api-access-vbhwx\") pod \"node-resolver-kv78j\" (UID: \"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\") " pod="openshift-dns/node-resolver-kv78j" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.423186 4718 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.436510 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.440236 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.440242 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:22 crc kubenswrapper[4718]: E1123 14:46:22.440424 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:46:22 crc kubenswrapper[4718]: E1123 14:46:22.440642 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.453559 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.470583 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.493834 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.494538 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.494578 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.494590 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.494604 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.494613 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:22Z","lastTransitionTime":"2025-11-23T14:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.512231 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa4
1ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.527491 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.543115 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.556374 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.576900 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-qb66k" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.577838 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-kv78j" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.593432 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zjskv"] Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.594402 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.596426 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-hkdqw"] Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.600928 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.601003 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.601022 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.601047 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.601071 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:22Z","lastTransitionTime":"2025-11-23T14:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.601338 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.601598 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.602033 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.602168 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.602422 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.602495 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.602538 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.603891 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.605214 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.606526 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.606912 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.608110 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.608685 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.621689 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.637913 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.652990 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s 
restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.668867 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.683383 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-cni-bin\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.683420 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa4a9264-1cb9-41bc-a30a-4e09bde21387-ovn-node-metrics-cert\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.683471 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785-proxy-tls\") pod \"machine-config-daemon-hkdqw\" (UID: \"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\") " pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.683489 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6vx2\" (UniqueName: \"kubernetes.io/projected/c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785-kube-api-access-g6vx2\") pod \"machine-config-daemon-hkdqw\" (UID: \"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\") " pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.683505 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aa4a9264-1cb9-41bc-a30a-4e09bde21387-ovnkube-script-lib\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: 
I1123 14:46:22.683522 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-run-openvswitch\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.683557 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa4a9264-1cb9-41bc-a30a-4e09bde21387-env-overrides\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.683587 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-node-log\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.683611 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-run-ovn-kubernetes\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.683637 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv7dr\" (UniqueName: \"kubernetes.io/projected/aa4a9264-1cb9-41bc-a30a-4e09bde21387-kube-api-access-fv7dr\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.683655 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-etc-openvswitch\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.683682 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-slash\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.683730 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-systemd-units\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.683750 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-run-ovn\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.683784 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-run-systemd\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.683808 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-var-lib-openvswitch\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.683831 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785-rootfs\") pod \"machine-config-daemon-hkdqw\" (UID: \"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\") " pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.683849 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785-mcd-auth-proxy-config\") pod \"machine-config-daemon-hkdqw\" (UID: \"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\") " pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.683868 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-cni-netd\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.683886 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.683915 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-run-netns\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.683996 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-log-socket\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.684052 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-kubelet\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.684072 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa4a9264-1cb9-41bc-a30a-4e09bde21387-ovnkube-config\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.695154 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.703926 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.704029 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 
14:46:22.704050 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.704109 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.704132 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:22Z","lastTransitionTime":"2025-11-23T14:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.713584 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.726999 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.751191 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.760480 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6d7996bb-8907-49ac-afb1-a8de8d2553c6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-557f4\" (UID: \"6d7996bb-8907-49ac-afb1-a8de8d2553c6\") " pod="openshift-multus/multus-additional-cni-plugins-557f4" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 
14:46:22.772190 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.785504 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-kubelet\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.785581 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa4a9264-1cb9-41bc-a30a-4e09bde21387-ovnkube-config\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.785675 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-cni-bin\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.785727 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa4a9264-1cb9-41bc-a30a-4e09bde21387-ovn-node-metrics-cert\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.785780 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785-proxy-tls\") pod \"machine-config-daemon-hkdqw\" (UID: \"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\") " pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.785828 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6vx2\" (UniqueName: \"kubernetes.io/projected/c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785-kube-api-access-g6vx2\") pod \"machine-config-daemon-hkdqw\" (UID: \"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\") " pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.785878 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-run-openvswitch\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.785922 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aa4a9264-1cb9-41bc-a30a-4e09bde21387-ovnkube-script-lib\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.785976 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa4a9264-1cb9-41bc-a30a-4e09bde21387-env-overrides\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.786027 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv7dr\" (UniqueName: \"kubernetes.io/projected/aa4a9264-1cb9-41bc-a30a-4e09bde21387-kube-api-access-fv7dr\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.786044 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-cni-bin\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.786079 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-etc-openvswitch\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.786136 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-node-log\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.786150 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-run-openvswitch\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.785668 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-kubelet\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.786278 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-node-log\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.786375 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-etc-openvswitch\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.786344 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-run-ovn-kubernetes\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.786494 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-run-ovn-kubernetes\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.786521 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-slash\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.786580 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-run-ovn\") pod \"ovnkube-node-zjskv\" (UID: 
\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.786638 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-slash\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.786663 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-systemd-units\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.786729 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-systemd-units\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.786702 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-run-ovn\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.786775 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785-rootfs\") pod \"machine-config-daemon-hkdqw\" (UID: \"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\") " pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.786811 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785-rootfs\") pod \"machine-config-daemon-hkdqw\" (UID: \"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\") " pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.786815 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa4a9264-1cb9-41bc-a30a-4e09bde21387-ovnkube-config\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.786835 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-run-systemd\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.786869 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa4a9264-1cb9-41bc-a30a-4e09bde21387-env-overrides\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.786870 
4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-run-systemd\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.786910 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-var-lib-openvswitch\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.786960 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-var-lib-openvswitch\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.787159 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785-mcd-auth-proxy-config\") pod \"machine-config-daemon-hkdqw\" (UID: \"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\") " pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.787213 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-cni-netd\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.787349 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.787394 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-cni-netd\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.787540 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-run-netns\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.787356 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aa4a9264-1cb9-41bc-a30a-4e09bde21387-ovnkube-script-lib\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.787406 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-run-netns\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.787592 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.787664 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-log-socket\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.787750 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-log-socket\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.788302 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785-mcd-auth-proxy-config\") pod \"machine-config-daemon-hkdqw\" (UID: \"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\") " pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.791340 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785-proxy-tls\") pod \"machine-config-daemon-hkdqw\" (UID: \"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\") " pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.791840 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.792086 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa4a9264-1cb9-41bc-a30a-4e09bde21387-ovn-node-metrics-cert\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.810699 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.810748 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.810761 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.810781 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.810794 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:22Z","lastTransitionTime":"2025-11-23T14:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.811057 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6vx2\" (UniqueName: \"kubernetes.io/projected/c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785-kube-api-access-g6vx2\") pod \"machine-config-daemon-hkdqw\" (UID: \"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\") " pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.811104 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.820016 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv7dr\" (UniqueName: \"kubernetes.io/projected/aa4a9264-1cb9-41bc-a30a-4e09bde21387-kube-api-access-fv7dr\") pod \"ovnkube-node-zjskv\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.829332 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.844883 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.861756 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.875985 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.877216 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-557f4" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.889761 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.913720 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.913800 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.913816 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.913871 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.913891 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:22Z","lastTransitionTime":"2025-11-23T14:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.915472 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.932164 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.948065 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.964309 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.978837 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:22 crc kubenswrapper[4718]: I1123 14:46:22.990961 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.005671 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:23Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.016422 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.016465 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.016474 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.016488 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.016498 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:23Z","lastTransitionTime":"2025-11-23T14:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.019473 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-23T14:46:23Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.032321 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:23Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:23 
crc kubenswrapper[4718]: I1123 14:46:23.051931 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv"
Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.052034 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw"
Nov 23 14:46:23 crc kubenswrapper[4718]: W1123 14:46:23.071916 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3d9cfca_3d2e_42a8_9fd9_2d6c7772b785.slice/crio-ee5a46a5c77e755eabcce6ba979d56cea2e7d9fc84e3447e99b28cad3711f120 WatchSource:0}: Error finding container ee5a46a5c77e755eabcce6ba979d56cea2e7d9fc84e3447e99b28cad3711f120: Status 404 returned error can't find the container with id ee5a46a5c77e755eabcce6ba979d56cea2e7d9fc84e3447e99b28cad3711f120
Nov 23 14:46:23 crc kubenswrapper[4718]: W1123 14:46:23.081007 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa4a9264_1cb9_41bc_a30a_4e09bde21387.slice/crio-59aee2898c5a453dcf6f6f076db7a9ad00a9b668d73cc35b1bccb8d6f70dc33b WatchSource:0}: Error finding container 59aee2898c5a453dcf6f6f076db7a9ad00a9b668d73cc35b1bccb8d6f70dc33b: Status 404 returned error can't find the container with id 59aee2898c5a453dcf6f6f076db7a9ad00a9b668d73cc35b1bccb8d6f70dc33b
Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.119108 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.119149 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.119159 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.119176 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.119189 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:23Z","lastTransitionTime":"2025-11-23T14:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.221544 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.221586 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.221597 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.221635 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.221650 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:23Z","lastTransitionTime":"2025-11-23T14:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.329077 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.329146 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.329163 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.329187 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.329205 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:23Z","lastTransitionTime":"2025-11-23T14:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.432371 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.432411 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.432420 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.432455 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Has your network provider started?"} Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.440950 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:23 crc kubenswrapper[4718]: E1123 14:46:23.441132 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.535874 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.535934 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.535953 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.535981 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.535999 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:23Z","lastTransitionTime":"2025-11-23T14:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.639593 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.639674 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.639694 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.639726 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.639744 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:23Z","lastTransitionTime":"2025-11-23T14:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.674176 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kv78j" event={"ID":"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6","Type":"ContainerStarted","Data":"e9be234676a0da6865525ad7734de4cb9780149a676801009f3059abed87f780"} Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.675468 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qb66k" event={"ID":"49e539fc-7a1f-42e0-9a69-230331321d85","Type":"ContainerStarted","Data":"3e96062ad5706c779a920986e99aa5f9d242dcaf17cd21e02778d1c0e25c188d"} Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.676743 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerStarted","Data":"59aee2898c5a453dcf6f6f076db7a9ad00a9b668d73cc35b1bccb8d6f70dc33b"} Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.677956 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerStarted","Data":"ee5a46a5c77e755eabcce6ba979d56cea2e7d9fc84e3447e99b28cad3711f120"} Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.679123 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" event={"ID":"6d7996bb-8907-49ac-afb1-a8de8d2553c6","Type":"ContainerStarted","Data":"4163435e1ed7cf3be8cf4e73a3acacbd0901aa10dbd3cf7edb558591c8f37ec5"} Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.742282 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.742334 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.742349 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.742374 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.742393 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:23Z","lastTransitionTime":"2025-11-23T14:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.844948 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.845013 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.845031 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.845056 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.845076 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:23Z","lastTransitionTime":"2025-11-23T14:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.953036 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.953116 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.953134 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.953159 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:23 crc kubenswrapper[4718]: I1123 14:46:23.953171 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:23Z","lastTransitionTime":"2025-11-23T14:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.062876 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.062947 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.062965 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.062990 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.063007 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:24Z","lastTransitionTime":"2025-11-23T14:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.169675 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.169738 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.169750 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.169773 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.169788 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:24Z","lastTransitionTime":"2025-11-23T14:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.273507 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.273550 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.273559 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.273574 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.273583 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:24Z","lastTransitionTime":"2025-11-23T14:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.377231 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.377290 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.377306 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.377332 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.377352 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:24Z","lastTransitionTime":"2025-11-23T14:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.440689 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.440753 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:24 crc kubenswrapper[4718]: E1123 14:46:24.440887 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:46:24 crc kubenswrapper[4718]: E1123 14:46:24.441049 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.480495 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.480563 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.480579 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.480607 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.480637 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:24Z","lastTransitionTime":"2025-11-23T14:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.486582 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-z75dw"] Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.487011 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-z75dw" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.489299 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.490473 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.490773 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.491313 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.514405 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.531945 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.544580 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.559156 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.577868 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.584134 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.584176 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.584186 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.584205 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.584217 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:24Z","lastTransitionTime":"2025-11-23T14:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.596231 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa4
1ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.608702 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/72846732-1e66-4f5b-9b12-2a3a9bf21672-serviceca\") pod \"node-ca-z75dw\" (UID: \"72846732-1e66-4f5b-9b12-2a3a9bf21672\") " pod="openshift-image-registry/node-ca-z75dw" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.608816 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmt45\" (UniqueName: \"kubernetes.io/projected/72846732-1e66-4f5b-9b12-2a3a9bf21672-kube-api-access-wmt45\") pod \"node-ca-z75dw\" (UID: \"72846732-1e66-4f5b-9b12-2a3a9bf21672\") " pod="openshift-image-registry/node-ca-z75dw" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.608917 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/72846732-1e66-4f5b-9b12-2a3a9bf21672-host\") pod \"node-ca-z75dw\" (UID: \"72846732-1e66-4f5b-9b12-2a3a9bf21672\") " pod="openshift-image-registry/node-ca-z75dw" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.610231 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.635057 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.662610 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.681879 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.683825 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerStarted","Data":"c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e"} Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.687275 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.687332 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.687347 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.687375 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.687397 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:24Z","lastTransitionTime":"2025-11-23T14:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.688576 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" event={"ID":"6d7996bb-8907-49ac-afb1-a8de8d2553c6","Type":"ContainerStarted","Data":"7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53"} Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.691495 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kv78j" event={"ID":"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6","Type":"ContainerStarted","Data":"77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321"} Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.697539 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qb66k" event={"ID":"49e539fc-7a1f-42e0-9a69-230331321d85","Type":"ContainerStarted","Data":"d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d"} Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.699925 4718 generic.go:334] "Generic (PLEG): container finished" podID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerID="684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14" exitCode=0 Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.699986 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerDied","Data":"684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14"} Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.701708 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.710149 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/72846732-1e66-4f5b-9b12-2a3a9bf21672-serviceca\") pod \"node-ca-z75dw\" (UID: \"72846732-1e66-4f5b-9b12-2a3a9bf21672\") " pod="openshift-image-registry/node-ca-z75dw" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.710284 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmt45\" (UniqueName: \"kubernetes.io/projected/72846732-1e66-4f5b-9b12-2a3a9bf21672-kube-api-access-wmt45\") pod \"node-ca-z75dw\" (UID: \"72846732-1e66-4f5b-9b12-2a3a9bf21672\") " pod="openshift-image-registry/node-ca-z75dw" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.710479 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/72846732-1e66-4f5b-9b12-2a3a9bf21672-host\") pod \"node-ca-z75dw\" (UID: \"72846732-1e66-4f5b-9b12-2a3a9bf21672\") " pod="openshift-image-registry/node-ca-z75dw" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.710596 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/72846732-1e66-4f5b-9b12-2a3a9bf21672-host\") pod \"node-ca-z75dw\" (UID: \"72846732-1e66-4f5b-9b12-2a3a9bf21672\") " pod="openshift-image-registry/node-ca-z75dw" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.711676 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/72846732-1e66-4f5b-9b12-2a3a9bf21672-serviceca\") pod \"node-ca-z75dw\" (UID: \"72846732-1e66-4f5b-9b12-2a3a9bf21672\") " pod="openshift-image-registry/node-ca-z75dw" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.730566 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.747516 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.750111 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmt45\" (UniqueName: \"kubernetes.io/projected/72846732-1e66-4f5b-9b12-2a3a9bf21672-kube-api-access-wmt45\") pod \"node-ca-z75dw\" (UID: \"72846732-1e66-4f5b-9b12-2a3a9bf21672\") " pod="openshift-image-registry/node-ca-z75dw" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.765967 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.780267 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.790871 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.790927 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.790954 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.790974 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.790985 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:24Z","lastTransitionTime":"2025-11-23T14:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.800311 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.808159 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-z75dw" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.822255 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: W1123 14:46:24.828566 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72846732_1e66_4f5b_9b12_2a3a9bf21672.slice/crio-66e0e4483c9e57ea0b7e5e705d9202f15719820d083cffbf99a04aec717738a8 WatchSource:0}: Error finding container 66e0e4483c9e57ea0b7e5e705d9202f15719820d083cffbf99a04aec717738a8: Status 404 returned error can't find the container with id 66e0e4483c9e57ea0b7e5e705d9202f15719820d083cffbf99a04aec717738a8 Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.841741 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.854000 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.870295 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.883912 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.893512 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.893547 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.893556 4718 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.893573 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.893586 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:24Z","lastTransitionTime":"2025-11-23T14:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.898753 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.917078 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.949232 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z 
is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.963251 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.988150 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.997429 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.997493 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.997503 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.997522 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:24 crc kubenswrapper[4718]: I1123 14:46:24.997535 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:24Z","lastTransitionTime":"2025-11-23T14:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.005095 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:25Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.019237 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:25Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.100830 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.100899 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.100910 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.101505 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.101563 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:25Z","lastTransitionTime":"2025-11-23T14:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.205329 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.205381 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.205391 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.205412 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.205422 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:25Z","lastTransitionTime":"2025-11-23T14:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.310294 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.310383 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.310399 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.310424 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.310465 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:25Z","lastTransitionTime":"2025-11-23T14:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.413521 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.413562 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.413574 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.413594 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.413604 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:25Z","lastTransitionTime":"2025-11-23T14:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.440612 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:25 crc kubenswrapper[4718]: E1123 14:46:25.440776 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.516522 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.516581 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.516596 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.516618 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.516637 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:25Z","lastTransitionTime":"2025-11-23T14:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.619521 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.619585 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.619598 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.619621 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.619632 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:25Z","lastTransitionTime":"2025-11-23T14:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.707185 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerStarted","Data":"202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376"} Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.708707 4718 generic.go:334] "Generic (PLEG): container finished" podID="6d7996bb-8907-49ac-afb1-a8de8d2553c6" containerID="7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53" exitCode=0 Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.708833 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" event={"ID":"6d7996bb-8907-49ac-afb1-a8de8d2553c6","Type":"ContainerDied","Data":"7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53"} Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.711299 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-z75dw" event={"ID":"72846732-1e66-4f5b-9b12-2a3a9bf21672","Type":"ContainerStarted","Data":"f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301"} Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.711359 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-z75dw" event={"ID":"72846732-1e66-4f5b-9b12-2a3a9bf21672","Type":"ContainerStarted","Data":"66e0e4483c9e57ea0b7e5e705d9202f15719820d083cffbf99a04aec717738a8"} Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.714046 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerStarted","Data":"5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2"} Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.714118 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerStarted","Data":"6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88"} Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.722277 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.722316 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.722331 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.722348 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.722361 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:25Z","lastTransitionTime":"2025-11-23T14:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.726402 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:25Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.753262 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:25Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.767589 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:25Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.781282 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:25Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.798593 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:25Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.813428 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:25Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.829094 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.829138 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.829150 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.829117 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"ima
ge\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:25Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.829170 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.829422 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:25Z","lastTransitionTime":"2025-11-23T14:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.848226 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:25Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.864928 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:25Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.879002 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:25Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.894610 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:25Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.911171 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:25Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.925471 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:25Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.941940 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.942126 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.942236 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.942307 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.942336 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:25Z","lastTransitionTime":"2025-11-23T14:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.962886 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211
305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:25Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:25 crc kubenswrapper[4718]: I1123 14:46:25.979336 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:25Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.001305 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:25Z 
is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.014429 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:26Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.029163 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:26Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.042310 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:26Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc 
kubenswrapper[4718]: I1123 14:46:26.045224 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.045377 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.045484 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.045569 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.045629 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:26Z","lastTransitionTime":"2025-11-23T14:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.057838 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:26Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.074184 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:26Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.089339 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:26Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.106992 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:26Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.121710 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:26Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.137899 4718 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:26Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.149679 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.149973 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.150077 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.150220 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.150331 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:26Z","lastTransitionTime":"2025-11-23T14:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.155779 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:26Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.169657 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:26Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.184465 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:26Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.252943 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.252984 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.252993 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.253005 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.253014 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:26Z","lastTransitionTime":"2025-11-23T14:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.355946 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.356462 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.356630 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.356760 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.356876 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:26Z","lastTransitionTime":"2025-11-23T14:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.440585 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.440589 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:26 crc kubenswrapper[4718]: E1123 14:46:26.440794 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:46:26 crc kubenswrapper[4718]: E1123 14:46:26.440901 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.460141 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.460218 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.460239 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.460264 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.460282 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:26Z","lastTransitionTime":"2025-11-23T14:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.562525 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.562788 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.562930 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.563068 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.563186 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:26Z","lastTransitionTime":"2025-11-23T14:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.667506 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.669281 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.669533 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.669699 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.669862 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:26Z","lastTransitionTime":"2025-11-23T14:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.727797 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" event={"ID":"6d7996bb-8907-49ac-afb1-a8de8d2553c6","Type":"ContainerStarted","Data":"3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1"} Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.740708 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerStarted","Data":"e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159"} Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.740777 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerStarted","Data":"9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7"} Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.740799 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerStarted","Data":"f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d"} Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.750416 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:26Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.772925 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.772999 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.773024 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.773055 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.773078 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:26Z","lastTransitionTime":"2025-11-23T14:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.776399 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:26Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.796631 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:26Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.817309 4718 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:26Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.837356 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:26Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.862786 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:26Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.877035 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.877126 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.877147 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.877176 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.877205 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:26Z","lastTransitionTime":"2025-11-23T14:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.887976 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:26Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.908544 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc590
0b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:26Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.928183 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:26Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.955195 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2025-11-23T14:46:26Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.974944 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:26Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.980265 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.980333 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.980351 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.980378 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.980396 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:26Z","lastTransitionTime":"2025-11-23T14:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:26 crc kubenswrapper[4718]: I1123 14:46:26.997782 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:26Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.024486 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:27Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.039646 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:27Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.084079 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.084141 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.084157 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.084182 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 
14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.084201 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:27Z","lastTransitionTime":"2025-11-23T14:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.187273 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.187322 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.187334 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.187351 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.187364 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:27Z","lastTransitionTime":"2025-11-23T14:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.290305 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.290336 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.290345 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.290357 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.290365 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:27Z","lastTransitionTime":"2025-11-23T14:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.392912 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.392972 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.392988 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.393010 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.393025 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:27Z","lastTransitionTime":"2025-11-23T14:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.440998 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:27 crc kubenswrapper[4718]: E1123 14:46:27.441184 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.495855 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.495914 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.495926 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.495950 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.495966 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:27Z","lastTransitionTime":"2025-11-23T14:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.598907 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.598946 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.598954 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.598968 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.598977 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:27Z","lastTransitionTime":"2025-11-23T14:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.702166 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.702212 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.702227 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.702244 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.702257 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:27Z","lastTransitionTime":"2025-11-23T14:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.750640 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerStarted","Data":"2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3"} Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.805857 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.805907 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.805924 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.805945 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.805962 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:27Z","lastTransitionTime":"2025-11-23T14:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.909094 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.909131 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.909142 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.909158 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:27 crc kubenswrapper[4718]: I1123 14:46:27.909168 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:27Z","lastTransitionTime":"2025-11-23T14:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.012695 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.012749 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.012765 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.012788 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.012805 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:28Z","lastTransitionTime":"2025-11-23T14:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.116436 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.116537 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.116552 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.116578 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.116593 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:28Z","lastTransitionTime":"2025-11-23T14:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.220588 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.220650 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.220666 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.220689 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.220706 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:28Z","lastTransitionTime":"2025-11-23T14:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.323770 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.323836 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.323859 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.323887 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.323913 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:28Z","lastTransitionTime":"2025-11-23T14:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.427768 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.427806 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.427816 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.427831 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.427843 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:28Z","lastTransitionTime":"2025-11-23T14:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.440608 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.440649 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:28 crc kubenswrapper[4718]: E1123 14:46:28.440796 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:46:28 crc kubenswrapper[4718]: E1123 14:46:28.440934 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.531035 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.531410 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.531617 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.531833 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.532009 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:28Z","lastTransitionTime":"2025-11-23T14:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.634552 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.634626 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.634644 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.634669 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.634685 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:28Z","lastTransitionTime":"2025-11-23T14:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.737363 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.737414 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.737425 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.737475 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.737487 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:28Z","lastTransitionTime":"2025-11-23T14:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.755961 4718 generic.go:334] "Generic (PLEG): container finished" podID="6d7996bb-8907-49ac-afb1-a8de8d2553c6" containerID="3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1" exitCode=0 Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.756045 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" event={"ID":"6d7996bb-8907-49ac-afb1-a8de8d2553c6","Type":"ContainerDied","Data":"3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1"} Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.781898 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.803131 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.823493 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.839815 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.839851 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.839864 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.839886 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 
23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.839897 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:28Z","lastTransitionTime":"2025-11-23T14:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.850468 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:28Z 
is after 2025-08-24T17:21:41Z" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.867233 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.883510 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.901941 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.914933 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.931024 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.943045 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.943112 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.943130 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.943157 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.943174 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:28Z","lastTransitionTime":"2025-11-23T14:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.947198 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.962921 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.977925 4718 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:28 crc kubenswrapper[4718]: I1123 14:46:28.993815 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.007308 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:29Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.045348 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.045405 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.045423 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.045473 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.045491 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:29Z","lastTransitionTime":"2025-11-23T14:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.149707 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.149761 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.149778 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.149798 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.149812 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:29Z","lastTransitionTime":"2025-11-23T14:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.252534 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.252574 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.252585 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.252601 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.252612 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:29Z","lastTransitionTime":"2025-11-23T14:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.355901 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.355961 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.355978 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.356004 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.356021 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:29Z","lastTransitionTime":"2025-11-23T14:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.440179 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:29 crc kubenswrapper[4718]: E1123 14:46:29.440640 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.459290 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.459616 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.459822 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.459982 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.460124 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:29Z","lastTransitionTime":"2025-11-23T14:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.563551 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.563947 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.564139 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.564284 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.564430 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:29Z","lastTransitionTime":"2025-11-23T14:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.667511 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.667579 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.667597 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.667621 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.667637 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:29Z","lastTransitionTime":"2025-11-23T14:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.766135 4718 generic.go:334] "Generic (PLEG): container finished" podID="6d7996bb-8907-49ac-afb1-a8de8d2553c6" containerID="9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816" exitCode=0 Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.766202 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" event={"ID":"6d7996bb-8907-49ac-afb1-a8de8d2553c6","Type":"ContainerDied","Data":"9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816"} Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.771108 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.771142 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.771155 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.771171 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.771183 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:29Z","lastTransitionTime":"2025-11-23T14:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.790629 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:29Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.813340 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:29Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.831102 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:29Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.851008 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:29Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.866358 4718 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:29Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.874697 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.875123 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.875540 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.875926 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.876090 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:29Z","lastTransitionTime":"2025-11-23T14:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.882397 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:29Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.900493 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:29Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.923785 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:29Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.929326 4718 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 23 14:46:29 crc kubenswrapper[4718]: E1123 14:46:29.929947 4718 request.go:1255] Unexpected error when reading response body: read tcp 38.102.83.2:35906->38.102.83.2:6443: use of closed network connection Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.933254 4718 status_manager.go:851] "Failed to get status for pod" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="unexpected error when reading response body. Please retry. Original error: read tcp 38.102.83.2:35906->38.102.83.2:6443: use of closed network connection" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.976491 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:29Z 
is after 2025-08-24T17:21:41Z" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.979539 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.979571 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.979582 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.979600 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.979613 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:29Z","lastTransitionTime":"2025-11-23T14:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:29 crc kubenswrapper[4718]: I1123 14:46:29.998037 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:29Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.038269 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:30Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.068536 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:30Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.082472 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.082496 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.082504 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.082517 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.082525 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:30Z","lastTransitionTime":"2025-11-23T14:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.086699 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:30Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.173407 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:46:30 crc kubenswrapper[4718]: E1123 14:46:30.173609 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:46:46.173576706 +0000 UTC m=+57.413196560 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.185620 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.185668 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.185685 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.185707 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.185722 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:30Z","lastTransitionTime":"2025-11-23T14:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.274325 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.274400 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.274491 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.274562 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:30 crc kubenswrapper[4718]: E1123 14:46:30.274676 4718 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Nov 23 14:46:30 crc kubenswrapper[4718]: E1123 14:46:30.274750 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 14:46:46.274728551 +0000 UTC m=+57.514348435 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 14:46:30 crc kubenswrapper[4718]: E1123 14:46:30.275359 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 14:46:30 crc kubenswrapper[4718]: E1123 14:46:30.275399 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 14:46:30 crc kubenswrapper[4718]: E1123 14:46:30.275409 4718 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 14:46:30 crc kubenswrapper[4718]: E1123 14:46:30.275422 4718 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:30 crc kubenswrapper[4718]: E1123 14:46:30.275555 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 14:46:46.275529471 +0000 UTC m=+57.515149355 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:30 crc kubenswrapper[4718]: E1123 14:46:30.275603 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 14:46:46.275586873 +0000 UTC m=+57.515206757 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 14:46:30 crc kubenswrapper[4718]: E1123 14:46:30.276024 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 14:46:30 crc kubenswrapper[4718]: E1123 14:46:30.276204 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 14:46:30 crc kubenswrapper[4718]: E1123 14:46:30.276360 4718 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:30 crc kubenswrapper[4718]: E1123 14:46:30.276699 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 14:46:46.276605919 +0000 UTC m=+57.516225793 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.289227 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.289493 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.289700 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.289855 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.289993 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:30Z","lastTransitionTime":"2025-11-23T14:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.393291 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.393347 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.393364 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.393387 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.393406 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:30Z","lastTransitionTime":"2025-11-23T14:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.440392 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.440685 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:30 crc kubenswrapper[4718]: E1123 14:46:30.441052 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:46:30 crc kubenswrapper[4718]: E1123 14:46:30.441264 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.458801 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:30Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.473005 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:30Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.495369 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:30Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.496305 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.496364 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.496381 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.496403 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.496418 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:30Z","lastTransitionTime":"2025-11-23T14:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.513536 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:30Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.541874 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:30Z 
is after 2025-08-24T17:21:41Z" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.556578 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:30Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.575285 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:30Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.591905 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:30Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.600051 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.600107 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.600123 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.600146 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.600164 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:30Z","lastTransitionTime":"2025-11-23T14:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.611822 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:30Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.631378 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:30Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.649949 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:30Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.670803 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:30Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.692395 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:30Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.702880 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.702939 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.702963 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.702988 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.703005 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:30Z","lastTransitionTime":"2025-11-23T14:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.709541 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:30Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.774332 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerStarted","Data":"98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6"} Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.805288 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.805349 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.805362 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.805383 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.805401 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:30Z","lastTransitionTime":"2025-11-23T14:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.907767 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.907823 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.907837 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.907851 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.907863 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:30Z","lastTransitionTime":"2025-11-23T14:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.963472 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.963868 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.963976 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.964103 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.964204 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:30Z","lastTransitionTime":"2025-11-23T14:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:30 crc kubenswrapper[4718]: E1123 14:46:30.989361 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:30Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.996515 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.996602 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.996622 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.996669 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:30 crc kubenswrapper[4718]: I1123 14:46:30.996693 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:30Z","lastTransitionTime":"2025-11-23T14:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:31 crc kubenswrapper[4718]: E1123 14:46:31.025206 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:31Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.030204 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.030271 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.030292 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.030320 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.030340 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:31Z","lastTransitionTime":"2025-11-23T14:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:31 crc kubenswrapper[4718]: E1123 14:46:31.049918 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:31Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.057511 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.057614 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
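Every one of these retries dies the same way: the serving certificate of the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-23. A minimal Go sketch to read the validity window straight off that endpoint (assuming it is reachable from the node; InsecureSkipVerify is used only so the handshake completes despite the expired certificate):

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Dial the webhook endpoint named in the kubelet errors and print the
	// serving certificate's validity window. Verification is skipped only
	// so the handshake succeeds even though the certificate is expired.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore.Format(time.RFC3339))
	fmt.Println("notAfter: ", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate is expired, matching the x509 error in the log")
	}
}

If notAfter is in the past, as the log indicates, every webhook call will keep failing in the same way until the certificate is rotated.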
event="NodeHasNoDiskPressure" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.057648 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.057689 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.057735 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:31Z","lastTransitionTime":"2025-11-23T14:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:31 crc kubenswrapper[4718]: E1123 14:46:31.080572 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:31Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.085992 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.086021 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
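The burst structure above is the kubelet's bounded retry: the status patch is attempted a fixed number of times and then abandoned with "update node status exceeds retry count", the terminal record just below. A rough sketch of that control flow (the retry constant is 5 in the upstream kubelet, and the messages mirror kubelet_node_status.go; tryUpdateNodeStatus here is a hypothetical stub standing in for the real patch call):

package main

import (
	"errors"
	"fmt"
)

// nodeStatusUpdateRetry mirrors the bound visible in the log: five
// "Error updating node status, will retry" records, then one terminal
// "update node status exceeds retry count".
const nodeStatusUpdateRetry = 5

// tryUpdateNodeStatus is a stand-in for the real node status patch,
// which in this log always fails at the webhook's expired certificate.
func tryUpdateNodeStatus(attempt int) error {
	return errors.New("failed calling webhook: x509: certificate has expired")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(i); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return fmt.Errorf("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println(err)
	}
}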
event="NodeHasNoDiskPressure" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.086032 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.086053 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.086067 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:31Z","lastTransitionTime":"2025-11-23T14:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:31 crc kubenswrapper[4718]: E1123 14:46:31.104591 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:31Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:31 crc kubenswrapper[4718]: E1123 14:46:31.104724 4718 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.106869 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
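From this point the log settles into the NotReady heartbeat, and each record names the root cause: no CNI configuration file in /etc/kubernetes/cni/net.d/. A small sketch of the readiness check behind that message (assuming libcni's usual behaviour of scanning the conf dir for *.conf, *.conflist and *.json files; the path is the one named in the log):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Look for CNI network configuration the way the container runtime's
	// CNI plumbing does: any *.conf, *.conflist or *.json file in the
	// configured conf dir counts. An empty dir keeps the node NotReady.
	dir := "/etc/kubernetes/cni/net.d"
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, _ := filepath.Glob(filepath.Join(dir, pat))
		found = append(found, matches...)
	}
	if len(found) == 0 {
		fmt.Fprintln(os.Stderr, "no CNI configuration file found; NetworkReady stays false")
		os.Exit(1)
	}
	for _, f := range found {
		fmt.Println(f)
	}
}

Here that check would fail because the network operator (OVN-Kubernetes on CRC) never got far enough to write its config, which is consistent with the expired-certificate failures above.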
event="NodeHasSufficientMemory" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.106919 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.106931 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.106951 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.106964 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:31Z","lastTransitionTime":"2025-11-23T14:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.210033 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.210081 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.210094 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.210114 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.210127 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:31Z","lastTransitionTime":"2025-11-23T14:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.313719 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.313774 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.313785 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.313807 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.313821 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:31Z","lastTransitionTime":"2025-11-23T14:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.416698 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.416754 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.416764 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.416783 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.416795 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:31Z","lastTransitionTime":"2025-11-23T14:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.440961 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:31 crc kubenswrapper[4718]: E1123 14:46:31.441226 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.520888 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.520972 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.520994 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.521025 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.521050 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:31Z","lastTransitionTime":"2025-11-23T14:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.626477 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.626550 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.626572 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.626605 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.626628 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:31Z","lastTransitionTime":"2025-11-23T14:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.730034 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.730110 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.730136 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.730171 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.730197 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:31Z","lastTransitionTime":"2025-11-23T14:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.795051 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" event={"ID":"6d7996bb-8907-49ac-afb1-a8de8d2553c6","Type":"ContainerStarted","Data":"0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f"} Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.833899 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.834046 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.834066 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.834098 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.834119 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:31Z","lastTransitionTime":"2025-11-23T14:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.938712 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.938796 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.938821 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.938868 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:31 crc kubenswrapper[4718]: I1123 14:46:31.938894 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:31Z","lastTransitionTime":"2025-11-23T14:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.042726 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.042797 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.042822 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.042857 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.042881 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:32Z","lastTransitionTime":"2025-11-23T14:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.146695 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.146802 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.146825 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.146869 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.146888 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:32Z","lastTransitionTime":"2025-11-23T14:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.250541 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.250614 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.250632 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.250659 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.250678 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:32Z","lastTransitionTime":"2025-11-23T14:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.354898 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.354955 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.354966 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.354988 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.355004 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:32Z","lastTransitionTime":"2025-11-23T14:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.440425 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.440425 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:32 crc kubenswrapper[4718]: E1123 14:46:32.440899 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:46:32 crc kubenswrapper[4718]: E1123 14:46:32.441018 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.458287 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.458348 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.458375 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.458408 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.458470 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:32Z","lastTransitionTime":"2025-11-23T14:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.561146 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.561214 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.561227 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.561252 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.561267 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:32Z","lastTransitionTime":"2025-11-23T14:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.665039 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.665391 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.665403 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.665421 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.665433 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:32Z","lastTransitionTime":"2025-11-23T14:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.770733 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.770796 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.770819 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.770857 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.770886 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:32Z","lastTransitionTime":"2025-11-23T14:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.803658 4718 generic.go:334] "Generic (PLEG): container finished" podID="6d7996bb-8907-49ac-afb1-a8de8d2553c6" containerID="0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f" exitCode=0 Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.803735 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" event={"ID":"6d7996bb-8907-49ac-afb1-a8de8d2553c6","Type":"ContainerDied","Data":"0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f"} Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.835688 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:32Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.853489 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:32Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.874645 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.874752 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.874777 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.874814 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.874840 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:32Z","lastTransitionTime":"2025-11-23T14:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.881628 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:32Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.913869 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:32Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.927228 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:32Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.944304 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:32Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.967052 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:32Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.979126 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.979161 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.979172 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.979189 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.979201 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:32Z","lastTransitionTime":"2025-11-23T14:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:32 crc kubenswrapper[4718]: I1123 14:46:32.984487 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-23T14:46:32Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.005200 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.030620 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.053972 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.074055 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.082784 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.082838 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.082851 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.082877 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.082890 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:33Z","lastTransitionTime":"2025-11-23T14:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.090683 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.117829 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:33Z 
is after 2025-08-24T17:21:41Z" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.186491 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.187034 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.187060 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.187094 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.187114 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:33Z","lastTransitionTime":"2025-11-23T14:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.291724 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.291806 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.291831 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.291871 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.291900 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:33Z","lastTransitionTime":"2025-11-23T14:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.395277 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.395351 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.395375 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.395404 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.395426 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:33Z","lastTransitionTime":"2025-11-23T14:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.441053 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:33 crc kubenswrapper[4718]: E1123 14:46:33.441249 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.504954 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.505036 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.505055 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.505081 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.505100 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:33Z","lastTransitionTime":"2025-11-23T14:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.608036 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.608120 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.608142 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.608170 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.608190 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:33Z","lastTransitionTime":"2025-11-23T14:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.711087 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.711152 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.711170 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.711196 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.711213 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:33Z","lastTransitionTime":"2025-11-23T14:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.816035 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.816104 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.816127 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.816162 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.816185 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:33Z","lastTransitionTime":"2025-11-23T14:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.824388 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerStarted","Data":"ba76a791afd5dfe5aa42dc13eb35db936bcc7029079bb38e0f81a2199c2fba46"} Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.824883 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.824927 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.831633 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" event={"ID":"6d7996bb-8907-49ac-afb1-a8de8d2553c6","Type":"ContainerStarted","Data":"a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce"} Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.849869 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.871293 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.885633 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.896187 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.912889 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\"
 for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.919116 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.919181 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.919196 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.919228 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.919245 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:33Z","lastTransitionTime":"2025-11-23T14:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.933434 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091
d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.956403 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.975786 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:33 crc kubenswrapper[4718]: I1123 14:46:33.997539 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.016751 4718 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.023474 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.023520 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.023533 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.023554 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.023570 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:34Z","lastTransitionTime":"2025-11-23T14:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.043398 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.069251 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.091022 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.112130 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.126729 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.126781 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.126838 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.126865 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.126888 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:34Z","lastTransitionTime":"2025-11-23T14:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.146160 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\
"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba76a791afd5dfe5aa42dc13eb35db936bcc7029079bb38e0f81a2199c2fba46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run
/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc 
kubenswrapper[4718]: I1123 14:46:34.169620 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.201944 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-sock
et\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba76a791afd5dfe5aa42dc13eb35db936bcc7029079bb38e0f81a2199c2fba46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.218819 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.229689 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.229746 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.229763 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.229791 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.229809 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:34Z","lastTransitionTime":"2025-11-23T14:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.238602 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.259896 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.286232 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.303016 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.321476 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.332971 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.333051 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.333077 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.333109 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.333130 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:34Z","lastTransitionTime":"2025-11-23T14:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.343005 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.363037 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.380210 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.402735 4718 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.423300 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.436043 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.436097 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.436116 4718 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.436140 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.436158 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:34Z","lastTransitionTime":"2025-11-23T14:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.440687 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.440920 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:34 crc kubenswrapper[4718]: E1123 14:46:34.441059 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:46:34 crc kubenswrapper[4718]: E1123 14:46:34.441186 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.441392 4718 scope.go:117] "RemoveContainer" containerID="d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.447909 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.539583 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.539649 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.539666 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.539694 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.539718 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:34Z","lastTransitionTime":"2025-11-23T14:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.643130 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.643218 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.643237 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.643266 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.643292 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:34Z","lastTransitionTime":"2025-11-23T14:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.746648 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.746733 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.746756 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.746785 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.746811 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:34Z","lastTransitionTime":"2025-11-23T14:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.837858 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.849782 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.849852 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.849870 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.849902 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.849922 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:34Z","lastTransitionTime":"2025-11-23T14:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.873274 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp"] Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.874171 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.878687 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.879340 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.904158 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.924758 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.930698 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.948987 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.955953 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.955997 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.956008 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.956026 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.956038 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:34Z","lastTransitionTime":"2025-11-23T14:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.972176 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:34 crc kubenswrapper[4718]: I1123 14:46:34.996490 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.010678 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.025803 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.034259 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad989470-f62f-4c09-a038-600445a0bef9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5tcvp\" (UID: \"ad989470-f62f-4c09-a038-600445a0bef9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.034332 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad989470-f62f-4c09-a038-600445a0bef9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5tcvp\" (UID: \"ad989470-f62f-4c09-a038-600445a0bef9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.034497 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad989470-f62f-4c09-a038-600445a0bef9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5tcvp\" (UID: \"ad989470-f62f-4c09-a038-600445a0bef9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.034531 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2mwp\" (UniqueName: \"kubernetes.io/projected/ad989470-f62f-4c09-a038-600445a0bef9-kube-api-access-j2mwp\") pod \"ovnkube-control-plane-749d76644c-5tcvp\" (UID: \"ad989470-f62f-4c09-a038-600445a0bef9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.038777 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.054403 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.060333 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.060397 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.060421 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.060491 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.060517 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:35Z","lastTransitionTime":"2025-11-23T14:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.074416 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.092794 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.111756 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.129687 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.135803 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad989470-f62f-4c09-a038-600445a0bef9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5tcvp\" (UID: \"ad989470-f62f-4c09-a038-600445a0bef9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.135881 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad989470-f62f-4c09-a038-600445a0bef9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5tcvp\" (UID: \"ad989470-f62f-4c09-a038-600445a0bef9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.135980 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad989470-f62f-4c09-a038-600445a0bef9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5tcvp\" (UID: \"ad989470-f62f-4c09-a038-600445a0bef9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.136028 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j2mwp\" (UniqueName: \"kubernetes.io/projected/ad989470-f62f-4c09-a038-600445a0bef9-kube-api-access-j2mwp\") pod \"ovnkube-control-plane-749d76644c-5tcvp\" (UID: \"ad989470-f62f-4c09-a038-600445a0bef9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.137023 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad989470-f62f-4c09-a038-600445a0bef9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5tcvp\" (UID: \"ad989470-f62f-4c09-a038-600445a0bef9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.137029 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad989470-f62f-4c09-a038-600445a0bef9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5tcvp\" (UID: \"ad989470-f62f-4c09-a038-600445a0bef9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.143079 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad989470-f62f-4c09-a038-600445a0bef9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5tcvp\" (UID: \"ad989470-f62f-4c09-a038-600445a0bef9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.155001 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2mwp\" (UniqueName: \"kubernetes.io/projected/ad989470-f62f-4c09-a038-600445a0bef9-kube-api-access-j2mwp\") pod \"ovnkube-control-plane-749d76644c-5tcvp\" (UID: \"ad989470-f62f-4c09-a038-600445a0bef9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.162632 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.162665 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.162674 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.162695 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.162704 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:35Z","lastTransitionTime":"2025-11-23T14:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.165390 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba76a791afd5dfe5aa42dc13eb35db936bcc7029079bb38e0f81a2199c2fba46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkub
e-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.179465 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad989470-f62f-4c09-a038-600445a0bef9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5tcvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.193699 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.208228 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.222133 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.232709 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.240601 4718 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.244669 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: W1123 14:46:35.265677 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad989470_f62f_4c09_a038_600445a0bef9.slice/crio-49351431651d4d82fd05eeab3cd5a2259bad43ef3cd89b505f0d57e6797ae976 WatchSource:0}: Error finding container 49351431651d4d82fd05eeab3cd5a2259bad43ef3cd89b505f0d57e6797ae976: Status 404 returned error can't find the container with id 49351431651d4d82fd05eeab3cd5a2259bad43ef3cd89b505f0d57e6797ae976 Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.265831 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 
23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.265889 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.265899 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.265918 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.265951 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:35Z","lastTransitionTime":"2025-11-23T14:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.266340 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.284490 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.302487 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.316278 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.338389 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba76a791afd5dfe5aa42dc13eb35db936bcc7029079bb38e0f81a2199c2fba46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.351732 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad989470-f62f-4c09-a038-600445a0bef9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5tcvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.364833 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.368573 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.368602 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.368614 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.368631 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.368643 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:35Z","lastTransitionTime":"2025-11-23T14:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.381289 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.398590 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9
8100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.412582 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.440384 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:35 crc kubenswrapper[4718]: E1123 14:46:35.440562 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.472032 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.472081 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.472094 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.472111 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.472124 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:35Z","lastTransitionTime":"2025-11-23T14:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.575828 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.575892 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.575906 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.575936 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.575951 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:35Z","lastTransitionTime":"2025-11-23T14:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.682086 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.682673 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.682693 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.682722 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.682741 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:35Z","lastTransitionTime":"2025-11-23T14:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.785721 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.785793 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.785809 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.785837 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.785853 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:35Z","lastTransitionTime":"2025-11-23T14:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.846782 4718 generic.go:334] "Generic (PLEG): container finished" podID="6d7996bb-8907-49ac-afb1-a8de8d2553c6" containerID="a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce" exitCode=0 Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.846887 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" event={"ID":"6d7996bb-8907-49ac-afb1-a8de8d2553c6","Type":"ContainerDied","Data":"a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce"} Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.849930 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.853539 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8bef58635ac91dee3ae059f1e1a13f5fd717c00e7d13ae580a6a36c9ed22685f"} Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.854093 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.856293 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" event={"ID":"ad989470-f62f-4c09-a038-600445a0bef9","Type":"ContainerStarted","Data":"49351431651d4d82fd05eeab3cd5a2259bad43ef3cd89b505f0d57e6797ae976"} Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.873564 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.888662 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.888726 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.888745 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.888771 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.888790 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:35Z","lastTransitionTime":"2025-11-23T14:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.898962 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.922532 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.944863 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad989470-f62f-4c09-a038-600445a0bef9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5tcvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.971707 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.995957 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.996013 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.996022 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.996039 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:35 crc kubenswrapper[4718]: I1123 14:46:35.996054 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:35Z","lastTransitionTime":"2025-11-23T14:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.005553 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba76a791afd5dfe5aa42dc13eb35db936bcc7029079bb38e0f81a2199c2fba46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.034422 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.052305 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.068418 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.086681 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.098423 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.098487 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.098498 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.098516 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.098527 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:36Z","lastTransitionTime":"2025-11-23T14:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.105568 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.127462 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.144143 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.162885 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.184385 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.205073 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bef58635ac91dee3ae059f1e1a13f5fd717c00e7d13ae580a6a36c9ed22685f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.206192 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.206271 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.206287 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.206306 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.206320 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:36Z","lastTransitionTime":"2025-11-23T14:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.226225 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.248341 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.268134 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.309236 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.309286 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.309301 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.309322 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.309338 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:36Z","lastTransitionTime":"2025-11-23T14:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.328198 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba76a791afd5dfe5aa42dc13eb35db936bcc7029079bb38e0f81a2199c2fba46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.361865 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad989470-f62f-4c09-a038-600445a0bef9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5tcvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.377670 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.388626 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-7qh4j"] Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.389146 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:46:36 crc kubenswrapper[4718]: E1123 14:46:36.389208 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.392413 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.406648 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.411344 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.411368 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.411377 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.411390 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.411399 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:36Z","lastTransitionTime":"2025-11-23T14:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.415745 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.426239 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.440477 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:36 crc kubenswrapper[4718]: E1123 14:46:36.440587 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.440744 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:36 crc kubenswrapper[4718]: E1123 14:46:36.440954 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.441212 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.456528 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.467647 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.477220 4718 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.487881 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.502726 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.513757 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.513967 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.514105 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.514246 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.514385 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:36Z","lastTransitionTime":"2025-11-23T14:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.516691 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.534026 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\"
:\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.548562 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.552009 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96ff9\" (UniqueName: \"kubernetes.io/projected/c82c8ca1-7a30-47c8-a679-abe265aca15b-kube-api-access-96ff9\") pod \"network-metrics-daemon-7qh4j\" (UID: \"c82c8ca1-7a30-47c8-a679-abe265aca15b\") " pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.552079 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c82c8ca1-7a30-47c8-a679-abe265aca15b-metrics-certs\") pod \"network-metrics-daemon-7qh4j\" (UID: \"c82c8ca1-7a30-47c8-a679-abe265aca15b\") " pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.568790 4718 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e
16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.599589 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.615198 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.617110 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.617157 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.617171 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.617190 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.617205 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:36Z","lastTransitionTime":"2025-11-23T14:46:36Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.632750 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.652204 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7qh4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82c8ca1-7a30-47c8-a679-abe265aca15b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7qh4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is 
after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.652844 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c82c8ca1-7a30-47c8-a679-abe265aca15b-metrics-certs\") pod \"network-metrics-daemon-7qh4j\" (UID: \"c82c8ca1-7a30-47c8-a679-abe265aca15b\") " pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.652979 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96ff9\" (UniqueName: \"kubernetes.io/projected/c82c8ca1-7a30-47c8-a679-abe265aca15b-kube-api-access-96ff9\") pod \"network-metrics-daemon-7qh4j\" (UID: \"c82c8ca1-7a30-47c8-a679-abe265aca15b\") " pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:46:36 crc kubenswrapper[4718]: E1123 14:46:36.653307 4718 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 14:46:36 crc kubenswrapper[4718]: E1123 14:46:36.653393 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c82c8ca1-7a30-47c8-a679-abe265aca15b-metrics-certs podName:c82c8ca1-7a30-47c8-a679-abe265aca15b nodeName:}" failed. No retries permitted until 2025-11-23 14:46:37.153370061 +0000 UTC m=+48.392989915 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c82c8ca1-7a30-47c8-a679-abe265aca15b-metrics-certs") pod "network-metrics-daemon-7qh4j" (UID: "c82c8ca1-7a30-47c8-a679-abe265aca15b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.675843 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bef58635ac91dee3ae059f1e1a13f5fd717c00e7d13ae580a6a36c9ed22685f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.703642 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.720987 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.721048 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.721071 4718 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.721100 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.721122 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:36Z","lastTransitionTime":"2025-11-23T14:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.725064 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.731047 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96ff9\" (UniqueName: \"kubernetes.io/projected/c82c8ca1-7a30-47c8-a679-abe265aca15b-kube-api-access-96ff9\") pod \"network-metrics-daemon-7qh4j\" (UID: \"c82c8ca1-7a30-47c8-a679-abe265aca15b\") " pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.742210 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.773735 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba76a791afd5dfe5aa42dc13eb35db936bcc7029
079bb38e0f81a2199c2fba46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.788117 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad989470-f62f-4c09-a038-600445a0bef9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5tcvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:36Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.825185 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.825247 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.825257 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.825278 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.825291 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:36Z","lastTransitionTime":"2025-11-23T14:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.862927 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" event={"ID":"ad989470-f62f-4c09-a038-600445a0bef9","Type":"ContainerStarted","Data":"f1435109324c9f93586933d8583b63776eca644bd447bad750c4a692226bf2ef"} Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.867713 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" event={"ID":"6d7996bb-8907-49ac-afb1-a8de8d2553c6","Type":"ContainerStarted","Data":"53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa"} Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.928514 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.928594 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.928615 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.928650 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:36 crc kubenswrapper[4718]: I1123 14:46:36.928670 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:36Z","lastTransitionTime":"2025-11-23T14:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.031573 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.031639 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.031656 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.031680 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.031698 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:37Z","lastTransitionTime":"2025-11-23T14:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.134100 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.134292 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.134382 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.134536 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.134636 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:37Z","lastTransitionTime":"2025-11-23T14:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.159929 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c82c8ca1-7a30-47c8-a679-abe265aca15b-metrics-certs\") pod \"network-metrics-daemon-7qh4j\" (UID: \"c82c8ca1-7a30-47c8-a679-abe265aca15b\") " pod="openshift-multus/network-metrics-daemon-7qh4j"
Nov 23 14:46:37 crc kubenswrapper[4718]: E1123 14:46:37.160234 4718 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 23 14:46:37 crc kubenswrapper[4718]: E1123 14:46:37.160377 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c82c8ca1-7a30-47c8-a679-abe265aca15b-metrics-certs podName:c82c8ca1-7a30-47c8-a679-abe265aca15b nodeName:}" failed. No retries permitted until 2025-11-23 14:46:38.160344467 +0000 UTC m=+49.399964351 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c82c8ca1-7a30-47c8-a679-abe265aca15b-metrics-certs") pod "network-metrics-daemon-7qh4j" (UID: "c82c8ca1-7a30-47c8-a679-abe265aca15b") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.237410 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.237501 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.237521 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.237547 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.237564 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:37Z","lastTransitionTime":"2025-11-23T14:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.340588 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.340651 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.340671 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.340696 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.340716 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:37Z","lastTransitionTime":"2025-11-23T14:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.441051 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 14:46:37 crc kubenswrapper[4718]: E1123 14:46:37.441333 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.443602 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.443663 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.443685 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.443713 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.443738 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:37Z","lastTransitionTime":"2025-11-23T14:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.546921 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.546969 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.546987 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.547007 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.547021 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:37Z","lastTransitionTime":"2025-11-23T14:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.650203 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.650260 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.650276 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.650300 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.650324 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:37Z","lastTransitionTime":"2025-11-23T14:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.753600 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.753915 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.754072 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.754238 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.754384 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:37Z","lastTransitionTime":"2025-11-23T14:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.858092 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.858186 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.858205 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.858236 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.858258 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:37Z","lastTransitionTime":"2025-11-23T14:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.874816 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" event={"ID":"ad989470-f62f-4c09-a038-600445a0bef9","Type":"ContainerStarted","Data":"e145cb8fa4966f57d40558e9b6b723ac0612635b0e55eec45a6e095da689c590"} Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.879686 4718 generic.go:334] "Generic (PLEG): container finished" podID="6d7996bb-8907-49ac-afb1-a8de8d2553c6" containerID="53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa" exitCode=0 Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.879746 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" event={"ID":"6d7996bb-8907-49ac-afb1-a8de8d2553c6","Type":"ContainerDied","Data":"53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa"} Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.895728 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:37Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.922326 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba76a791afd5dfe5aa42dc13eb35db936bcc7029
079bb38e0f81a2199c2fba46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:37Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.935508 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad989470-f62f-4c09-a038-600445a0bef9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1435109324c9f93586933d8583b63776eca644bd447bad750c4a692226bf2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e145cb8fa4966f57d40558e9b6b723ac0612635b0e55eec45a6e095da689c590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5tcvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:37Z is after 2025-08-24T17:21:41Z" Nov 23 
14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.948427 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:37Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.959811 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:37Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.960397 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.960423 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.960431 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.960457 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.960473 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:37Z","lastTransitionTime":"2025-11-23T14:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.975396 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx6
2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:37Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:37 crc kubenswrapper[4718]: I1123 14:46:37.991294 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:37Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.010941 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.030800 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.052022 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.064595 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.064677 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.064700 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.064730 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.064755 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:38Z","lastTransitionTime":"2025-11-23T14:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.070616 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.089667 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.111345 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bef58635ac91dee3ae059f1e1a13f5fd717c00e7d13ae580a6a36c9ed22685f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.133010 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.153259 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.168558 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.168599 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.168607 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.168623 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.168633 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:38Z","lastTransitionTime":"2025-11-23T14:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.169559 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c82c8ca1-7a30-47c8-a679-abe265aca15b-metrics-certs\") pod \"network-metrics-daemon-7qh4j\" (UID: \"c82c8ca1-7a30-47c8-a679-abe265aca15b\") " pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:46:38 crc kubenswrapper[4718]: E1123 14:46:38.169891 4718 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 14:46:38 crc kubenswrapper[4718]: E1123 14:46:38.170027 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c82c8ca1-7a30-47c8-a679-abe265aca15b-metrics-certs podName:c82c8ca1-7a30-47c8-a679-abe265aca15b nodeName:}" failed. No retries permitted until 2025-11-23 14:46:40.17000001 +0000 UTC m=+51.409619894 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c82c8ca1-7a30-47c8-a679-abe265aca15b-metrics-certs") pod "network-metrics-daemon-7qh4j" (UID: "c82c8ca1-7a30-47c8-a679-abe265aca15b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.170764 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7qh4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82c8ca1-7a30-47c8-a679-abe265aca15b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7qh4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 
2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.191271 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.211030 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.226929 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7qh4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82c8ca1-7a30-47c8-a679-abe265aca15b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7qh4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc 
kubenswrapper[4718]: I1123 14:46:38.247769 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bef58635ac91dee3ae059f1e1a13f5fd717c00e7d13ae580a6a36c9ed22685f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.271153 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.271222 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.271250 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.271280 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.271301 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:38Z","lastTransitionTime":"2025-11-23T14:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.281425 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba76a791afd5dfe5aa42dc13eb35db936bcc7029079bb38e0f81a2199c2fba46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\
":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 
14:46:38.299375 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad989470-f62f-4c09-a038-600445a0bef9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1435109324c9f93586933d8583b63776eca644bd447bad750c4a692226bf2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e145cb8fa4966f57d40558e9b6b723ac0612635b0e55eec45a6e095da689c590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5tcvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.320748 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.343055 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.367881 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.374254 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.374322 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.374341 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.374366 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.374385 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:38Z","lastTransitionTime":"2025-11-23T14:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.384430 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.404412 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.423860 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.442291 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.442470 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:38 crc kubenswrapper[4718]: E1123 14:46:38.442596 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.442620 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:38 crc kubenswrapper[4718]: E1123 14:46:38.442774 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:46:38 crc kubenswrapper[4718]: E1123 14:46:38.442904 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.443732 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.459416 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.477363 4718 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.477414 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.477426 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.477469 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.477489 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:38Z","lastTransitionTime":"2025-11-23T14:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.477589 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.493084 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.580151 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.580219 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.580238 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.580264 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.580284 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:38Z","lastTransitionTime":"2025-11-23T14:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.683667 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.683740 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.683757 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.683782 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.683800 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:38Z","lastTransitionTime":"2025-11-23T14:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.786983 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.787023 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.787038 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.787061 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.787077 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:38Z","lastTransitionTime":"2025-11-23T14:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.889747 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.889839 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.889853 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.889902 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.889919 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:38Z","lastTransitionTime":"2025-11-23T14:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.993663 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.993744 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.993759 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.993788 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:38 crc kubenswrapper[4718]: I1123 14:46:38.993805 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:38Z","lastTransitionTime":"2025-11-23T14:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.096346 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.096476 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.096500 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.096531 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.096558 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:39Z","lastTransitionTime":"2025-11-23T14:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.200942 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.201014 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.201028 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.201053 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.201068 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:39Z","lastTransitionTime":"2025-11-23T14:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.304371 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.304454 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.304467 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.304489 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.304499 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:39Z","lastTransitionTime":"2025-11-23T14:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.408069 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.408104 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.408115 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.408130 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.408141 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:39Z","lastTransitionTime":"2025-11-23T14:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.440688 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:39 crc kubenswrapper[4718]: E1123 14:46:39.440888 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.510619 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.510678 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.510697 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.510722 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.510739 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:39Z","lastTransitionTime":"2025-11-23T14:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.613397 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.613514 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.613537 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.613566 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.613586 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:39Z","lastTransitionTime":"2025-11-23T14:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.716342 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.716398 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.716414 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.716436 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.716492 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:39Z","lastTransitionTime":"2025-11-23T14:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.818786 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.818845 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.818861 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.818884 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.818900 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:39Z","lastTransitionTime":"2025-11-23T14:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.892913 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" event={"ID":"6d7996bb-8907-49ac-afb1-a8de8d2553c6","Type":"ContainerStarted","Data":"053cb938dee3d8b74728c76a43dced53bea21f7e4385be0c1d5343cb36bf6767"} Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.914285 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-23T14:46:39Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.927518 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.927583 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.927601 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.927656 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.927674 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:39Z","lastTransitionTime":"2025-11-23T14:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.935816 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\
\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:39Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.955476 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7qh4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82c8ca1-7a30-47c8-a679-abe265aca15b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7qh4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:39Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:39 crc kubenswrapper[4718]: I1123 14:46:39.977424 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bef58635ac91dee3ae059f1e1a13f5fd717c00e7d13ae580a6a36c9ed22685f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:39Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.009413 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba76a791afd5dfe5aa42dc13eb35db936bcc7029079bb38e0f81a2199c2fba46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.027356 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad989470-f62f-4c09-a038-600445a0bef9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1435109324c9f93586933d8583b63776eca644bd447bad750c4a692226bf2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e145cb8fa4966f57d40558e9b6b723ac0612635b0e55eec45a6e095da689c590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5tcvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 
14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.030699 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.030745 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.030764 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.030789 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.030808 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:40Z","lastTransitionTime":"2025-11-23T14:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.047866 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.071593 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.098578 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053cb938dee3d8b74728c76a43dced53bea21f7e4385be0c1d5343cb36bf6767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.116609 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.134246 4718 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.134310 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.134334 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.134365 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.134399 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:40Z","lastTransitionTime":"2025-11-23T14:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.140677 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.162145 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.195022 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c82c8ca1-7a30-47c8-a679-abe265aca15b-metrics-certs\") pod \"network-metrics-daemon-7qh4j\" (UID: \"c82c8ca1-7a30-47c8-a679-abe265aca15b\") " pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:46:40 crc kubenswrapper[4718]: E1123 14:46:40.195234 4718 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 14:46:40 crc kubenswrapper[4718]: E1123 14:46:40.195335 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c82c8ca1-7a30-47c8-a679-abe265aca15b-metrics-certs podName:c82c8ca1-7a30-47c8-a679-abe265aca15b nodeName:}" failed. No retries permitted until 2025-11-23 14:46:44.195311938 +0000 UTC m=+55.434931792 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c82c8ca1-7a30-47c8-a679-abe265aca15b-metrics-certs") pod "network-metrics-daemon-7qh4j" (UID: "c82c8ca1-7a30-47c8-a679-abe265aca15b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.196376 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.213136 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:4
6:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.232918 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.236776 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.236826 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.236838 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.236856 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.236869 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:40Z","lastTransitionTime":"2025-11-23T14:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.253206 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.339889 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.339940 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.339950 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.339969 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.339985 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:40Z","lastTransitionTime":"2025-11-23T14:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.440402 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.440513 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.440578 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:40 crc kubenswrapper[4718]: E1123 14:46:40.440664 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:46:40 crc kubenswrapper[4718]: E1123 14:46:40.440806 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:46:40 crc kubenswrapper[4718]: E1123 14:46:40.441035 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.442512 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.442576 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.442599 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.442624 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.442644 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:40Z","lastTransitionTime":"2025-11-23T14:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.460312 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.481827 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.507104 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.516259 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.525160 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.530007 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.543611 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.545394 4718 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.545433 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.545466 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.545485 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.545497 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:40Z","lastTransitionTime":"2025-11-23T14:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.558851 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7qh4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82c8ca1-7a30-47c8-a679-abe265aca15b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7qh4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.575167 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bef58635ac91dee3ae059f1e1a13f5fd717c00e7d13ae580a6a36c9ed22685f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.594928 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.612363 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.633366 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.648055 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.648115 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.648136 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.648167 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.648190 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:40Z","lastTransitionTime":"2025-11-23T14:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.661618 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba76a791afd5dfe5aa42dc13eb35db936bcc7029079bb38e0f81a2199c2fba46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\
":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 
14:46:40.678229 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad989470-f62f-4c09-a038-600445a0bef9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1435109324c9f93586933d8583b63776eca644bd447bad750c4a692226bf2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e145cb8fa4966f57d40558e9b6b723ac0612635b0e55eec45a6e095da689c590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5tcvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.693019 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.709839 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.727810 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.749953 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053cb938dee3d8b74728c76a43dced53bea21f7e4385be0c1d5343cb36bf6767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.751344 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.751553 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.751741 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.751892 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.752030 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:40Z","lastTransitionTime":"2025-11-23T14:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.766313 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.796228 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba76a791afd5dfe5aa42dc13eb35db936bcc7029079bb38e0f81a2199c2fba46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.821693 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad989470-f62f-4c09-a038-600445a0bef9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1435109324c9f93586933d8583b63776eca644bd447bad750c4a692226bf2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e145cb8fa4966f57d40558e9b6b723ac0612635b0e55eec45a6e095da689c590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5tcvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 
14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.834964 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.856833 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.861048 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.861083 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.861100 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.861124 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.861143 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:40Z","lastTransitionTime":"2025-11-23T14:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.872606 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.891508 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053cb938dee3d8b74728c76a43dced53bea21f7e4385be0c1d5343cb36bf6767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.903635 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.945820 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.966324 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.966355 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.966364 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.966378 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.966388 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:40Z","lastTransitionTime":"2025-11-23T14:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.968428 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:40 crc kubenswrapper[4718]: I1123 14:46:40.988827 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:40Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.003808 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:41Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.013740 4718 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-7qh4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82c8ca1-7a30-47c8-a679-abe265aca15b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7qh4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:41Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.026371 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bef58635ac91dee3ae059f1e1a13f5fd717c00e7d13ae580a6a36c9ed22685f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:41Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.041658 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5081b466-d08b-4432-92f6-7f7f1c9fb607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e828a18d60c1a77b19b2a707494209d60b21e5a11007c4029bcfffcc632866b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04b9e9ababbd821f8c8d2d640867c41d6918182b6869ee198fee85a37934e530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac138ef465b8fb107aa6828f2d531a40cae862514aa9629e5b7bf88c2348c3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:41Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.055451 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:41Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.069064 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.069096 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.069107 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.069121 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.069130 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:41Z","lastTransitionTime":"2025-11-23T14:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.070281 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:41Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.171998 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.172063 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.172073 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.172100 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.172118 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:41Z","lastTransitionTime":"2025-11-23T14:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.281695 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.281767 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.281785 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.281812 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.281830 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:41Z","lastTransitionTime":"2025-11-23T14:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.284899 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.284982 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.285008 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.285038 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.285061 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:41Z","lastTransitionTime":"2025-11-23T14:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:41 crc kubenswrapper[4718]: E1123 14:46:41.307621 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:41Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.312714 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.312804 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.312826 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.312849 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.312870 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:41Z","lastTransitionTime":"2025-11-23T14:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:41 crc kubenswrapper[4718]: E1123 14:46:41.333156 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:41Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.337933 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.337990 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.338009 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.338035 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.338052 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:41Z","lastTransitionTime":"2025-11-23T14:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:41 crc kubenswrapper[4718]: E1123 14:46:41.358597 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:41Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.363634 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.363693 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.363708 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.363734 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.363753 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:41Z","lastTransitionTime":"2025-11-23T14:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:41 crc kubenswrapper[4718]: E1123 14:46:41.384855 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:41Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.390872 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.390930 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.390947 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.390972 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.390991 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:41Z","lastTransitionTime":"2025-11-23T14:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:41 crc kubenswrapper[4718]: E1123 14:46:41.413697 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:41Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:41 crc kubenswrapper[4718]: E1123 14:46:41.413930 4718 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.416278 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.416393 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.416413 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.416492 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.416527 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:41Z","lastTransitionTime":"2025-11-23T14:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.440225 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:41 crc kubenswrapper[4718]: E1123 14:46:41.440418 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.520591 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.520657 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.520675 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.520701 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.520717 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:41Z","lastTransitionTime":"2025-11-23T14:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.624301 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.624368 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.624385 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.624410 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.624427 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:41Z","lastTransitionTime":"2025-11-23T14:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.727510 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.727569 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.727585 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.727611 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.727628 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:41Z","lastTransitionTime":"2025-11-23T14:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.829798 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.829868 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.829886 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.829911 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.829930 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:41Z","lastTransitionTime":"2025-11-23T14:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.905433 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zjskv_aa4a9264-1cb9-41bc-a30a-4e09bde21387/ovnkube-controller/0.log" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.909802 4718 generic.go:334] "Generic (PLEG): container finished" podID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerID="ba76a791afd5dfe5aa42dc13eb35db936bcc7029079bb38e0f81a2199c2fba46" exitCode=1 Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.909848 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerDied","Data":"ba76a791afd5dfe5aa42dc13eb35db936bcc7029079bb38e0f81a2199c2fba46"} Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.911193 4718 scope.go:117] "RemoveContainer" containerID="ba76a791afd5dfe5aa42dc13eb35db936bcc7029079bb38e0f81a2199c2fba46" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.932947 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bef58635ac91dee3ae059f1e1a13f5fd717c00e7d13ae580a6a36c9ed22685f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:41Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.935702 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.935758 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.935775 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.935800 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.935818 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:41Z","lastTransitionTime":"2025-11-23T14:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.952204 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5081b466-d08b-4432-92f6-7f7f1c9fb607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e828a18d60c1a77b19b2a707494209d60b21e5a11007c4029bcfffcc632866b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04b9e9ababbd821f8c8d2d640867c41d6918182b6869ee198fee85a37934e530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac138ef465b8fb107aa6828f2d531a40cae862514aa9629e5b7bf88c2348c3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:41Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.974128 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:41Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:41 crc kubenswrapper[4718]: I1123 14:46:41.999522 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:41Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.016390 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7qh4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82c8ca1-7a30-47c8-a679-abe265aca15b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7qh4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:42 crc 
kubenswrapper[4718]: I1123 14:46:42.040650 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.040714 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.040731 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.040767 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.040785 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:42Z","lastTransitionTime":"2025-11-23T14:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.040932 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.073113 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba76a791afd5dfe5aa42dc13eb35db936bcc7029
079bb38e0f81a2199c2fba46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba76a791afd5dfe5aa42dc13eb35db936bcc7029079bb38e0f81a2199c2fba46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"message\\\":\\\":140\\\\nI1123 14:46:41.043110 5993 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *factory.egressNode\\\\nI1123 14:46:41.043119 5993 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *factory.egressIPPod\\\\nI1123 14:46:41.043124 5993 services_controller.go:183] Shutting down controller ovn-lb-controller for network=default\\\\nI1123 14:46:41.043141 5993 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.NetworkPolicy\\\\nI1123 14:46:41.043155 5993 admin_network_policy_controller.go:307] Shutting down controller default-network-controller\\\\nI1123 14:46:41.043165 5993 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.Namespace\\\\nI1123 14:46:41.043129 5993 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *factory.egressIPNamespace\\\\nI1123 14:46:41.043155 5993 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.Node\\\\nI1123 14:46:41.043201 5993 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.Pod\\\\nI1123 14:46:41.040433 5993 egressqos.go:301] Shutting down EgressQoS controller\\\\nI1123 14:46:41.043655 5993 nad_controller.go:166] [zone-nad-controller NAD controller]: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.094347 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad989470-f62f-4c09-a038-600445a0bef9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1435109324c9f93586933d8583b63776eca644bd447bad750c4a692226bf2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e145cb8fa4966f57d40558e9b6b723ac0612635b0e55eec45a6e095da689c590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5tcvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.114128 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.134493 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.143575 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.143681 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.143735 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.143762 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.143779 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:42Z","lastTransitionTime":"2025-11-23T14:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.159797 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053cb938dee3d8b74728c76a43dced53bea21f7e4385be0c1d5343cb36bf6767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.177231 4718 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.202570 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.225908 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.245418 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.248603 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.248671 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.248688 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.248713 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.248731 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:42Z","lastTransitionTime":"2025-11-23T14:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.271810 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.288836 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.351663 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.351727 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.351745 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.351769 4718 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.351790 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:42Z","lastTransitionTime":"2025-11-23T14:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.440662 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.440726 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.440740 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:46:42 crc kubenswrapper[4718]: E1123 14:46:42.440885 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:46:42 crc kubenswrapper[4718]: E1123 14:46:42.441124 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:46:42 crc kubenswrapper[4718]: E1123 14:46:42.441327 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.479079 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.479148 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.479172 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.479201 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.479222 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:42Z","lastTransitionTime":"2025-11-23T14:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.582344 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.582473 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.582503 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.582539 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.582563 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:42Z","lastTransitionTime":"2025-11-23T14:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.685566 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.685635 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.685652 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.685677 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.685705 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:42Z","lastTransitionTime":"2025-11-23T14:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.788662 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.788716 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.788729 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.788749 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.788765 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:42Z","lastTransitionTime":"2025-11-23T14:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.891782 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.891850 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.891864 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.891887 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.891901 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:42Z","lastTransitionTime":"2025-11-23T14:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.915737 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zjskv_aa4a9264-1cb9-41bc-a30a-4e09bde21387/ovnkube-controller/0.log" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.919449 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerStarted","Data":"e6a3af009a3bd9c863689f80f7245af64f81afee2717c35c9d8b41d381d0da62"} Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.919928 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.936280 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7qh4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82c8ca1-7a30-47c8-a679-abe265aca15b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7qh4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.956135 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bef58635ac91dee3ae059f1e1a13f5fd717c00e7d13ae580a6a36c9ed22685f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.982950 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5081b466-d08b-4432-92f6-7f7f1c9fb607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e828a18d60c1a77b19b2a707494209d60b21e5a11007c4029bcfffcc632866b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04b9e9ababbd821f8c8d2d640867c41d6918182b6869ee198fee85a37934e530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac138ef465b8fb107aa6828f2d531a40cae862514aa9629e5b7bf88c2348c3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.995805 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.995871 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.995895 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.995922 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:42 crc kubenswrapper[4718]: I1123 14:46:42.995942 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:42Z","lastTransitionTime":"2025-11-23T14:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.006818 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.031008 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.052204 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.089019 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6a3af009a3bd9c863689f80f7245af64f81afee2717c35c9d8b41d381d0da62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba76a791afd5dfe5aa42dc13eb35db936bcc7029079bb38e0f81a2199c2fba46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"message\\\":\\\":140\\\\nI1123 14:46:41.043110 5993 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *factory.egressNode\\\\nI1123 14:46:41.043119 5993 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *factory.egressIPPod\\\\nI1123 14:46:41.043124 5993 services_controller.go:183] Shutting down controller ovn-lb-controller for network=default\\\\nI1123 14:46:41.043141 5993 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.NetworkPolicy\\\\nI1123 14:46:41.043155 5993 admin_network_policy_controller.go:307] Shutting down controller default-network-controller\\\\nI1123 14:46:41.043165 5993 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.Namespace\\\\nI1123 14:46:41.043129 5993 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *factory.egressIPNamespace\\\\nI1123 14:46:41.043155 5993 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.Node\\\\nI1123 14:46:41.043201 5993 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.Pod\\\\nI1123 14:46:41.040433 5993 egressqos.go:301] Shutting down EgressQoS controller\\\\nI1123 14:46:41.043655 5993 nad_controller.go:166] [zone-nad-controller NAD controller]: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.100196 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.100322 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.100390 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.100426 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.100505 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:43Z","lastTransitionTime":"2025-11-23T14:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.109021 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad989470-f62f-4c09-a038-600445a0bef9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1435109324c9f93586933d8583b63776eca644bd447bad750c4a692226bf2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e145cb8fa4966f57d40558e9b6b723ac0612635b0e55eec45a6e095da689c590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5tcvp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.126066 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.144557 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.161829 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.178300 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053cb938dee3d8b74728c76a43dced53bea21f7e4385be0c1d5343cb36bf6767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.191603 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.203512 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.203561 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.203586 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.203616 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.203640 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:43Z","lastTransitionTime":"2025-11-23T14:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.210750 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.230679 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.245120 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.258254 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.307106 4718 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.307202 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.307221 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.307246 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.307299 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:43Z","lastTransitionTime":"2025-11-23T14:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.410098 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.410167 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.410184 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.410208 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.410226 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:43Z","lastTransitionTime":"2025-11-23T14:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.440764 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:43 crc kubenswrapper[4718]: E1123 14:46:43.440910 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.513673 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.513760 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.513777 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.513803 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.513821 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:43Z","lastTransitionTime":"2025-11-23T14:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.616817 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.616893 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.616918 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.616946 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.616967 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:43Z","lastTransitionTime":"2025-11-23T14:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.720967 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.721048 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.721068 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.721093 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.721111 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:43Z","lastTransitionTime":"2025-11-23T14:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.825881 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.825936 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.825952 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.825977 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.825995 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:43Z","lastTransitionTime":"2025-11-23T14:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.925795 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zjskv_aa4a9264-1cb9-41bc-a30a-4e09bde21387/ovnkube-controller/1.log" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.926927 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zjskv_aa4a9264-1cb9-41bc-a30a-4e09bde21387/ovnkube-controller/0.log" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.928153 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.928224 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.928250 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.928283 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.928307 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:43Z","lastTransitionTime":"2025-11-23T14:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.931812 4718 generic.go:334] "Generic (PLEG): container finished" podID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerID="e6a3af009a3bd9c863689f80f7245af64f81afee2717c35c9d8b41d381d0da62" exitCode=1 Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.931878 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerDied","Data":"e6a3af009a3bd9c863689f80f7245af64f81afee2717c35c9d8b41d381d0da62"} Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.931929 4718 scope.go:117] "RemoveContainer" containerID="ba76a791afd5dfe5aa42dc13eb35db936bcc7029079bb38e0f81a2199c2fba46" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.933655 4718 scope.go:117] "RemoveContainer" containerID="e6a3af009a3bd9c863689f80f7245af64f81afee2717c35c9d8b41d381d0da62" Nov 23 14:46:43 crc kubenswrapper[4718]: E1123 14:46:43.937159 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zjskv_openshift-ovn-kubernetes(aa4a9264-1cb9-41bc-a30a-4e09bde21387)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.959178 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:43 crc kubenswrapper[4718]: I1123 14:46:43.979816 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.003599 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053cb938dee3d8b74728c76a43dced53bea21f7e4385be0c1d5343cb36bf6767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:44Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.020172 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\
"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:44Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.032025 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.032112 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.032133 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.032157 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.032205 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:44Z","lastTransitionTime":"2025-11-23T14:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.040872 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b4629
71091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:44Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.062889 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:44Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.082144 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:44Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.101159 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:44Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.118805 4718 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:44Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.136288 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.136361 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.136387 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.136418 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.136481 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:44Z","lastTransitionTime":"2025-11-23T14:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.143254 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bef58635ac91dee3ae059f1e1a13f5fd717c00e7d13ae580a6a36c9ed22685f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:44Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.161034 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5081b466-d08b-4432-92f6-7f7f1c9fb607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e828a18d60c1a77b19b2a707494209d60b21e5a11007c4029bcfffcc632866b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04b9e9ababbd821f8c8d2d640867c41d6918182b6869ee198fee85a37934e530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac138ef465b8fb107aa6828f2d531a40cae862514aa9629e5b7bf88c2348c3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:44Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.180207 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:44Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.200043 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\
\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:44Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.216671 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7qh4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82c8ca1-7a30-47c8-a679-abe265aca15b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7qh4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:44Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.235019 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:44Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.239774 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.239840 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.239866 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.239897 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.239919 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:44Z","lastTransitionTime":"2025-11-23T14:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.243850 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c82c8ca1-7a30-47c8-a679-abe265aca15b-metrics-certs\") pod \"network-metrics-daemon-7qh4j\" (UID: \"c82c8ca1-7a30-47c8-a679-abe265aca15b\") " pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:46:44 crc kubenswrapper[4718]: E1123 14:46:44.244050 4718 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 14:46:44 crc kubenswrapper[4718]: E1123 14:46:44.244167 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c82c8ca1-7a30-47c8-a679-abe265aca15b-metrics-certs podName:c82c8ca1-7a30-47c8-a679-abe265aca15b nodeName:}" failed. No retries permitted until 2025-11-23 14:46:52.244134849 +0000 UTC m=+63.483754733 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c82c8ca1-7a30-47c8-a679-abe265aca15b-metrics-certs") pod "network-metrics-daemon-7qh4j" (UID: "c82c8ca1-7a30-47c8-a679-abe265aca15b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.265915 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"contai
nerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6a3af009a3bd9c863689f80f7245af64f81afee2717c35c9d8b41d381d0da62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba76a791afd5dfe5aa42dc13eb35db936bcc7029079bb38e0f81a2199c2fba46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T14:46:41Z\\\",\\\"message\\\":\\\":140\\\\nI1123 14:46:41.043110 5993 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *factory.egressNode\\\\nI1123 14:46:41.043119 5993 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *factory.egressIPPod\\\\nI1123 14:46:41.043124 5993 services_controller.go:183] Shutting down controller ovn-lb-controller for network=default\\\\nI1123 14:46:41.043141 5993 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.NetworkPolicy\\\\nI1123 14:46:41.043155 5993 admin_network_policy_controller.go:307] Shutting down controller default-network-controller\\\\nI1123 14:46:41.043165 5993 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.Namespace\\\\nI1123 14:46:41.043129 5993 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *factory.egressIPNamespace\\\\nI1123 14:46:41.043155 5993 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.Node\\\\nI1123 14:46:41.043201 5993 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.Pod\\\\nI1123 14:46:41.040433 5993 egressqos.go:301] Shutting down EgressQoS controller\\\\nI1123 14:46:41.043655 5993 nad_controller.go:166] [zone-nad-controller NAD controller]: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6a3af009a3bd9c863689f80f7245af64f81afee2717c35c9d8b41d381d0da62\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T14:46:43Z\\\",\\\"message\\\":\\\"8519615025667110816) with []\\\\nI1123 14:46:43.133839 6272 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1123 14:46:43.133901 6272 factory.go:1336] Added *v1.Node event handler 7\\\\nI1123 14:46:43.133942 6272 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1123 14:46:43.134325 6272 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1123 14:46:43.134429 6272 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1123 14:46:43.134503 6272 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1123 14:46:43.134553 6272 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1123 14:46:43.134597 6272 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 14:46:43.134614 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 14:46:43.134636 6272 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 14:46:43.134661 6272 factory.go:656] Stopping watch factory\\\\nI1123 14:46:43.134676 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 14:46:43.134687 6272 ovnkube.go:599] Stopped ovnkube\\\\nI1123 14:46:43.134724 6272 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1123 14:46:43.134824 6272 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:44Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.281420 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad989470-f62f-4c09-a038-600445a0bef9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1435109324c9f93586933d8583b63776eca644bd447bad750c4a692226bf2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e145cb8fa4966f57d40558e9b6b723ac0612635b0e55eec45a6e095da689c590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5tcvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:44Z is after 2025-08-24T17:21:41Z"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.343652 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.343759 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.343781 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.343809 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.343833 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:44Z","lastTransitionTime":"2025-11-23T14:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.440873 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.440977 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.440986 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 14:46:44 crc kubenswrapper[4718]: E1123 14:46:44.441409 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 14:46:44 crc kubenswrapper[4718]: E1123 14:46:44.441584 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 14:46:44 crc kubenswrapper[4718]: E1123 14:46:44.441296 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.448951 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.448998 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.449010 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.449031 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.449047 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:44Z","lastTransitionTime":"2025-11-23T14:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.552295 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.552359 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.552376 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.552403 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.552419 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:44Z","lastTransitionTime":"2025-11-23T14:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.660804 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.660851 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.660867 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.660891 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.660906 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:44Z","lastTransitionTime":"2025-11-23T14:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.765833 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.765881 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.765892 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.765911 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.765923 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:44Z","lastTransitionTime":"2025-11-23T14:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.868837 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.868889 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.868906 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.868928 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.868946 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:44Z","lastTransitionTime":"2025-11-23T14:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.939121 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zjskv_aa4a9264-1cb9-41bc-a30a-4e09bde21387/ovnkube-controller/1.log"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.945291 4718 scope.go:117] "RemoveContainer" containerID="e6a3af009a3bd9c863689f80f7245af64f81afee2717c35c9d8b41d381d0da62"
Nov 23 14:46:44 crc kubenswrapper[4718]: E1123 14:46:44.945595 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zjskv_openshift-ovn-kubernetes(aa4a9264-1cb9-41bc-a30a-4e09bde21387)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.966792 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:44Z is after 2025-08-24T17:21:41Z"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.972535 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.972592 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.972610 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.972635 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:44 crc kubenswrapper[4718]: I1123 14:46:44.972652 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:44Z","lastTransitionTime":"2025-11-23T14:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.013694 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6a3af009a3bd9c863689f80f7245af64f81afee2717c35c9d8b41d381d0da62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6a3af009a3bd9c863689f80f7245af64f81afee2717c35c9d8b41d381d0da62\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T14:46:43Z\\\",\\\"message\\\":\\\"8519615025667110816) with []\\\\nI1123 14:46:43.133839 6272 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1123 14:46:43.133901 6272 factory.go:1336] Added *v1.Node event handler 7\\\\nI1123 14:46:43.133942 6272 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1123 14:46:43.134325 6272 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1123 14:46:43.134429 6272 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1123 14:46:43.134503 6272 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1123 14:46:43.134553 6272 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1123 14:46:43.134597 6272 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 14:46:43.134614 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 14:46:43.134636 6272 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 14:46:43.134661 6272 factory.go:656] Stopping watch factory\\\\nI1123 14:46:43.134676 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 14:46:43.134687 6272 ovnkube.go:599] Stopped ovnkube\\\\nI1123 14:46:43.134724 6272 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1123 14:46:43.134824 6272 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zjskv_openshift-ovn-kubernetes(aa4a9264-1cb9-41bc-a30a-4e09bde21387)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:45Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.032421 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad989470-f62f-4c09-a038-600445a0bef9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1435109324c9f93586933d8583b63776eca644bd447bad750c4a692226bf2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e145cb8fa4966f57d40558e9b6b723ac0612635b0e55eec45a6e095da689c590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5tcvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:45Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.050905 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:45Z is after 2025-08-24T17:21:41Z"
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.076751 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.076817 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.076838 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.076863 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.076825 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:45Z is after 2025-08-24T17:21:41Z"
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.076879 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:45Z","lastTransitionTime":"2025-11-23T14:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.101271 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053cb938dee3d8b74728c76a43dced53bea21f7e4385be0c1d5343cb36bf6767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\
":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:45Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.119641 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}
}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:45Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.142649 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:45Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.164225 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:45Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.180199 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.180297 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.180319 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.180381 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.180398 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:45Z","lastTransitionTime":"2025-11-23T14:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.183632 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:45Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.201564 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:45Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.216233 4718 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:45Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.242640 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bef58635ac91dee3ae059f1e1a13f5fd717c00e7d13ae580a6a36c9ed22685f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:45Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.262756 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5081b466-d08b-4432-92f6-7f7f1c9fb607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e828a18d60c1a77b19b2a707494209d60b21e5a11007c4029bcfffcc632866b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04b9e9ababbd821f8c8d2d640867c41d6918182b6869ee198fee85a37934e530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac138ef465b8fb107aa6828f2d531a40cae862514aa9629e5b7bf88c2348c3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:45Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.274540 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:45Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.283368 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.283429 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.283464 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.283491 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.283509 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:45Z","lastTransitionTime":"2025-11-23T14:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.290071 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:45Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.303934 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7qh4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82c8ca1-7a30-47c8-a679-abe265aca15b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7qh4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:45Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.386875 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.386917 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.386925 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.386942 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.386952 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:45Z","lastTransitionTime":"2025-11-23T14:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.440936 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:45 crc kubenswrapper[4718]: E1123 14:46:45.441152 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.490318 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.490369 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.490386 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.490407 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.490426 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:45Z","lastTransitionTime":"2025-11-23T14:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.593242 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.593324 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.593370 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.593395 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.593415 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:45Z","lastTransitionTime":"2025-11-23T14:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.696228 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.696302 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.696320 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.696346 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.696366 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:45Z","lastTransitionTime":"2025-11-23T14:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.799954 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.800014 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.800032 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.800057 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.800074 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:45Z","lastTransitionTime":"2025-11-23T14:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
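From here the dominant pattern is the Ready=False condition re-recorded every ~100 ms: the kubelet's sync loop keeps reporting NodeNotReady until a CNI config appears. The check it is complaining about is simple to reproduce; a sketch that lists network configs in the directory named by the condition, with the accepted extensions following the libcni convention (an assumption, not something the log states):

```go
// cnicheck.go - minimal sketch: list CNI network configs the way a CNI
// runtime would, in the directory named by the NodeNotReady condition.
// The path comes straight from the log; the extensions (.conf, .conflist,
// .json) follow the libcni convention.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatalf("read %s: %v", dir, err)
	}
	found := 0
	for _, e := range entries {
		ext := strings.ToLower(filepath.Ext(e.Name()))
		if ext == ".conf" || ext == ".conflist" || ext == ".json" {
			fmt.Println(filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		// Exactly the state the kubelet keeps reporting above:
		// NetworkPluginNotReady until a network config is written.
		fmt.Println("no CNI configuration files found")
	}
}
```

An empty directory here is normal this early in a boot; the network plugin pods (Multus and OVN-Kubernetes on this cluster, judging by the pods in this log) write the config once they come up, and the condition then clears on its own.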
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.903630 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.903687 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.903703 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.903740 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:45 crc kubenswrapper[4718]: I1123 14:46:45.903757 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:45Z","lastTransitionTime":"2025-11-23T14:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.006689 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.006769 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.006788 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.006815 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.006833 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:46Z","lastTransitionTime":"2025-11-23T14:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.110211 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.110293 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.110321 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.110376 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.110394 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:46Z","lastTransitionTime":"2025-11-23T14:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.213110 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.213161 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.213178 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.213200 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.213217 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:46Z","lastTransitionTime":"2025-11-23T14:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.266613 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:46:46 crc kubenswrapper[4718]: E1123 14:46:46.266971 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:18.266924733 +0000 UTC m=+89.506544627 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.315980 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.316039 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.316056 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.316079 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.316095 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:46Z","lastTransitionTime":"2025-11-23T14:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.368034 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.368106 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.368148 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.368204 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 14:46:46 crc kubenswrapper[4718]: E1123 14:46:46.368271 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
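The TearDown failure is a different startup race: the volume manager tries to unmount a CSI volume before the kubevirt.io.hostpath-provisioner driver has re-registered with this kubelet. Whether the driver is back can be read off the plugin-registration directory; a sketch assuming the standard registration path /var/lib/kubelet/plugins_registry (adjust if this kubelet uses a non-default root):

```go
// csidrivers.go - minimal sketch: list the plugin-registration sockets the
// kubelet watches, to see whether kubevirt.io.hostpath-provisioner has
// re-registered yet. The directory is the conventional registration path,
// an assumption rather than something stated in the log.
package main

import (
	"fmt"
	"io/fs"
	"log"
	"path/filepath"
)

func main() {
	dir := "/var/lib/kubelet/plugins_registry"
	err := filepath.WalkDir(dir, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		// CSI drivers register by dropping a UNIX socket named after the
		// driver; if none mention the hostpath provisioner, the kubelet's
		// "not found in the list of registered CSI drivers" error follows.
		if !d.IsDir() && filepath.Ext(path) == ".sock" {
			fmt.Println(path)
		}
		return nil
	})
	if err != nil {
		log.Fatalf("walk %s: %v", dir, err)
	}
}
```

The 32s backoff in the nestedpendingoperations line is the designed recovery: the unmount is simply re-queued (here until 14:47:18) and should succeed once the driver's registration socket reappears.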
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 14:46:46 crc kubenswrapper[4718]: E1123 14:46:46.368311 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 14:46:46 crc kubenswrapper[4718]: E1123 14:46:46.368358 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 14:46:46 crc kubenswrapper[4718]: E1123 14:46:46.368379 4718 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:46 crc kubenswrapper[4718]: E1123 14:46:46.368420 4718 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 14:46:46 crc kubenswrapper[4718]: E1123 14:46:46.368318 4718 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 14:46:46 crc kubenswrapper[4718]: E1123 14:46:46.368324 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 14:46:46 crc kubenswrapper[4718]: E1123 14:46:46.368599 4718 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:46 crc kubenswrapper[4718]: E1123 14:46:46.368482 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 14:47:18.368427087 +0000 UTC m=+89.608046971 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 14:46:46 crc kubenswrapper[4718]: E1123 14:46:46.368691 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 14:47:18.368659723 +0000 UTC m=+89.608279607 (durationBeforeRetry 32s). 
Nov 23 14:46:46 crc kubenswrapper[4718]: E1123 14:46:46.368714 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 14:47:18.368702474 +0000 UTC m=+89.608322358 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 23 14:46:46 crc kubenswrapper[4718]: E1123 14:46:46.368750 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 14:47:18.368735435 +0000 UTC m=+89.608355319 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.419011 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.419186 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.419222 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.419253 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.419277 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:46Z","lastTransitionTime":"2025-11-23T14:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.440736 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j"
Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.440902 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:46 crc kubenswrapper[4718]: E1123 14:46:46.441150 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.441228 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:46 crc kubenswrapper[4718]: E1123 14:46:46.441415 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:46:46 crc kubenswrapper[4718]: E1123 14:46:46.441594 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.522481 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.522547 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.522565 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.522589 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.522606 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:46Z","lastTransitionTime":"2025-11-23T14:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.626044 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.626128 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.626151 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.626180 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.626202 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:46Z","lastTransitionTime":"2025-11-23T14:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.729106 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.729170 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.729187 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.729263 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.729281 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:46Z","lastTransitionTime":"2025-11-23T14:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.832488 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.832551 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.832567 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.832589 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.832608 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:46Z","lastTransitionTime":"2025-11-23T14:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.935418 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.935535 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.935568 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.935596 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:46 crc kubenswrapper[4718]: I1123 14:46:46.935614 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:46Z","lastTransitionTime":"2025-11-23T14:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.038260 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.038326 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.038345 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.038368 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.038387 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:47Z","lastTransitionTime":"2025-11-23T14:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.141784 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.141862 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.141884 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.141917 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.141941 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:47Z","lastTransitionTime":"2025-11-23T14:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.246340 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.246398 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.246416 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.246472 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.246492 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:47Z","lastTransitionTime":"2025-11-23T14:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.349638 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.349904 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.349921 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.349947 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.349965 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:47Z","lastTransitionTime":"2025-11-23T14:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.440827 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:47 crc kubenswrapper[4718]: E1123 14:46:47.441030 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.452993 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.453058 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.453075 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.453101 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.453121 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:47Z","lastTransitionTime":"2025-11-23T14:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.556591 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.556656 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.556673 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.556698 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.556718 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:47Z","lastTransitionTime":"2025-11-23T14:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.659963 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.660126 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.660152 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.660184 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.660205 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:47Z","lastTransitionTime":"2025-11-23T14:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.764361 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.764432 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.764486 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.764513 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.764534 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:47Z","lastTransitionTime":"2025-11-23T14:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.868409 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.868509 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.868528 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.868557 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.868576 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:47Z","lastTransitionTime":"2025-11-23T14:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.971808 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.971887 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.971912 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.971947 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:47 crc kubenswrapper[4718]: I1123 14:46:47.971967 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:47Z","lastTransitionTime":"2025-11-23T14:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.075207 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.075247 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.075256 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.075271 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.075281 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:48Z","lastTransitionTime":"2025-11-23T14:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.179073 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.179153 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.179174 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.179205 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.179227 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:48Z","lastTransitionTime":"2025-11-23T14:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.282393 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.282521 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.282546 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.282577 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.282600 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:48Z","lastTransitionTime":"2025-11-23T14:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.386134 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.386194 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.386212 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.386235 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.386252 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:48Z","lastTransitionTime":"2025-11-23T14:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.440707 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.440795 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.440839 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:46:48 crc kubenswrapper[4718]: E1123 14:46:48.440939 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:46:48 crc kubenswrapper[4718]: E1123 14:46:48.441065 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:46:48 crc kubenswrapper[4718]: E1123 14:46:48.441277 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.489190 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.489248 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.489266 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.489293 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.489310 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:48Z","lastTransitionTime":"2025-11-23T14:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.592555 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.592646 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.592665 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.592690 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.592708 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:48Z","lastTransitionTime":"2025-11-23T14:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.718037 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.718105 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.718122 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.718146 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.718164 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:48Z","lastTransitionTime":"2025-11-23T14:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.821633 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.821689 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.821700 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.821721 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.821734 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:48Z","lastTransitionTime":"2025-11-23T14:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.924947 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.924998 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.925007 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.925027 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:48 crc kubenswrapper[4718]: I1123 14:46:48.925037 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:48Z","lastTransitionTime":"2025-11-23T14:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.028118 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.028183 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.028203 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.028231 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.028251 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:49Z","lastTransitionTime":"2025-11-23T14:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.131627 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.131712 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.131730 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.131756 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.131773 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:49Z","lastTransitionTime":"2025-11-23T14:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.235218 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.235309 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.235330 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.235356 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.235375 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:49Z","lastTransitionTime":"2025-11-23T14:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.339847 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.339919 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.339936 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.339961 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.339980 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:49Z","lastTransitionTime":"2025-11-23T14:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.440100 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:49 crc kubenswrapper[4718]: E1123 14:46:49.440370 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.443325 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.443376 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.443394 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.443421 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.443472 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:49Z","lastTransitionTime":"2025-11-23T14:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.547337 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.547401 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.547417 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.547513 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.547539 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:49Z","lastTransitionTime":"2025-11-23T14:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.650584 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.650643 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.650660 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.650683 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.650700 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:49Z","lastTransitionTime":"2025-11-23T14:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.753670 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.753733 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.753751 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.753778 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.753796 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:49Z","lastTransitionTime":"2025-11-23T14:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.860595 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.861281 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.861318 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.861345 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.861363 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:49Z","lastTransitionTime":"2025-11-23T14:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.964109 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.964170 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.964189 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.964215 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:49 crc kubenswrapper[4718]: I1123 14:46:49.964232 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:49Z","lastTransitionTime":"2025-11-23T14:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.066544 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.066606 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.066625 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.066650 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.066668 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:50Z","lastTransitionTime":"2025-11-23T14:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.170909 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.170972 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.170989 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.171016 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.171035 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:50Z","lastTransitionTime":"2025-11-23T14:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.274359 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.274408 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.274420 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.274463 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.274476 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:50Z","lastTransitionTime":"2025-11-23T14:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.377265 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.377312 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.377328 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.377350 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.377366 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:50Z","lastTransitionTime":"2025-11-23T14:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.440046 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.440161 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.440289 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:50 crc kubenswrapper[4718]: E1123 14:46:50.440401 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:46:50 crc kubenswrapper[4718]: E1123 14:46:50.440687 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:46:50 crc kubenswrapper[4718]: E1123 14:46:50.441607 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.463948 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:50Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.480594 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.480658 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.480673 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.480697 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.480713 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:50Z","lastTransitionTime":"2025-11-23T14:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.495412 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6a3af009a3bd9c863689f80f7245af64f81afee2717c35c9d8b41d381d0da62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6a3af009a3bd9c863689f80f7245af64f81afee2717c35c9d8b41d381d0da62\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T14:46:43Z\\\",\\\"message\\\":\\\"8519615025667110816) with []\\\\nI1123 14:46:43.133839 6272 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1123 14:46:43.133901 6272 factory.go:1336] Added *v1.Node event handler 7\\\\nI1123 14:46:43.133942 6272 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1123 14:46:43.134325 6272 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1123 14:46:43.134429 6272 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1123 14:46:43.134503 6272 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1123 14:46:43.134553 6272 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1123 14:46:43.134597 6272 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 14:46:43.134614 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 14:46:43.134636 6272 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 14:46:43.134661 6272 factory.go:656] Stopping watch factory\\\\nI1123 14:46:43.134676 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 14:46:43.134687 6272 ovnkube.go:599] Stopped ovnkube\\\\nI1123 14:46:43.134724 6272 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1123 14:46:43.134824 6272 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zjskv_openshift-ovn-kubernetes(aa4a9264-1cb9-41bc-a30a-4e09bde21387)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:50Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.512898 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad989470-f62f-4c09-a038-600445a0bef9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1435109324c9f93586933d8583b63776eca644bd447bad750c4a692226bf2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e145cb8fa4966f57d40558e9b6b723ac0612635b0e55eec45a6e095da689c590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5tcvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:50Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.531373 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:50Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.549946 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:50Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.573707 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053cb938dee3d8b74728c76a43dced53bea21f7e4385be0c1d5343cb36bf6767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:50Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.588633 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.588722 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.588775 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.588813 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.588865 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:50Z","lastTransitionTime":"2025-11-23T14:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.588831 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:50Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.608222 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:50Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.624350 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:50Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.642876 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:50Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.657391 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:50Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.670619 4718 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:50Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.688115 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bef58635ac91dee3ae059f1e1a13f5fd717c00e7d13ae580a6a36c9ed22685f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:50Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.692748 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.692799 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.692822 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.692849 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.692872 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:50Z","lastTransitionTime":"2025-11-23T14:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.702636 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5081b466-d08b-4432-92f6-7f7f1c9fb607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e828a18d60c1a77b19b2a707494209d60b21e5a11007c4029bcfffcc632866b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04b9e9ababbd821f8c8d2d640867c41d6918182b6869ee198fee85a37934e530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac138ef465b8fb107aa6828f2d531a40cae862514aa9629e5b7bf88c2348c3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:50Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.721803 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:50Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.740929 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:50Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.757556 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7qh4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82c8ca1-7a30-47c8-a679-abe265aca15b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7qh4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:50Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:50 crc 
kubenswrapper[4718]: I1123 14:46:50.795236 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.795295 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.795311 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.795345 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.795364 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:50Z","lastTransitionTime":"2025-11-23T14:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.898371 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.898472 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.898491 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.898516 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:50 crc kubenswrapper[4718]: I1123 14:46:50.898535 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:50Z","lastTransitionTime":"2025-11-23T14:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.001083 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.001168 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.001193 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.001220 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.001238 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:51Z","lastTransitionTime":"2025-11-23T14:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.104194 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.104257 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.104274 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.104299 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.104316 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:51Z","lastTransitionTime":"2025-11-23T14:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.207134 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.207198 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.207214 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.207240 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.207260 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:51Z","lastTransitionTime":"2025-11-23T14:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.309337 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.309400 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.309417 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.309468 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.309487 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:51Z","lastTransitionTime":"2025-11-23T14:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.411952 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.412033 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.412059 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.412089 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.412115 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:51Z","lastTransitionTime":"2025-11-23T14:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.440544 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:51 crc kubenswrapper[4718]: E1123 14:46:51.440810 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.515584 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.515662 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.515684 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.515713 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.515738 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:51Z","lastTransitionTime":"2025-11-23T14:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.619201 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.619264 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.619281 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.619305 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.619321 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:51Z","lastTransitionTime":"2025-11-23T14:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.722952 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.723051 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.723085 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.723117 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.723138 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:51Z","lastTransitionTime":"2025-11-23T14:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.809960 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.810078 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.810097 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.810157 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.810182 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:51Z","lastTransitionTime":"2025-11-23T14:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:51 crc kubenswrapper[4718]: E1123 14:46:51.832059 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:51Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.837299 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.837350 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.837367 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.837389 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.837410 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:51Z","lastTransitionTime":"2025-11-23T14:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:51 crc kubenswrapper[4718]: E1123 14:46:51.857221 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:51Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.865783 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.865822 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.865832 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.865846 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.865856 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:51Z","lastTransitionTime":"2025-11-23T14:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:51 crc kubenswrapper[4718]: E1123 14:46:51.880058 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:51Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.883791 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.883848 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.883868 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.883893 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.883910 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:51Z","lastTransitionTime":"2025-11-23T14:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:51 crc kubenswrapper[4718]: E1123 14:46:51.897755 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:51Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.902249 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.902304 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.902323 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.902377 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.902401 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:51Z","lastTransitionTime":"2025-11-23T14:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:51 crc kubenswrapper[4718]: E1123 14:46:51.922645 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:51Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:51 crc kubenswrapper[4718]: E1123 14:46:51.922875 4718 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.924822 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.924872 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.924888 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.924908 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.924924 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:51Z","lastTransitionTime":"2025-11-23T14:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.955699 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.974053 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bef58635ac91dee3ae059f1e1a13f5fd717c00e7d13ae580a6a36c9ed22685f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:51Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:51 crc kubenswrapper[4718]: I1123 14:46:51.991050 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5081b466-d08b-4432-92f6-7f7f1c9fb607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e828a18d60c1a77b19b2a707494209d60b21e5a11007c4029bcfffcc632866b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04b9e9ababbd821f8c8d2d640867c41d6918182b6869ee198fee85a37934e530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac138ef465b8fb107aa6828f2d531a40cae862514aa9629e5b7bf88c2348c3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:51Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.008953 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:52Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.026057 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\
\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:52Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.028557 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.028603 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.028617 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.028638 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.028651 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:52Z","lastTransitionTime":"2025-11-23T14:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.043925 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7qh4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82c8ca1-7a30-47c8-a679-abe265aca15b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7qh4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:52Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.059390 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:52Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.080200 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6a3af009a3bd9c863689f80f7245af64f81afee
2717c35c9d8b41d381d0da62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6a3af009a3bd9c863689f80f7245af64f81afee2717c35c9d8b41d381d0da62\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T14:46:43Z\\\",\\\"message\\\":\\\"8519615025667110816) with []\\\\nI1123 14:46:43.133839 6272 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1123 14:46:43.133901 6272 factory.go:1336] Added *v1.Node event handler 7\\\\nI1123 14:46:43.133942 6272 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1123 14:46:43.134325 6272 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1123 14:46:43.134429 6272 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1123 14:46:43.134503 6272 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1123 14:46:43.134553 6272 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1123 14:46:43.134597 6272 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 14:46:43.134614 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 14:46:43.134636 6272 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 14:46:43.134661 6272 factory.go:656] Stopping watch factory\\\\nI1123 14:46:43.134676 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 14:46:43.134687 6272 ovnkube.go:599] Stopped ovnkube\\\\nI1123 14:46:43.134724 6272 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1123 14:46:43.134824 6272 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zjskv_openshift-ovn-kubernetes(aa4a9264-1cb9-41bc-a30a-4e09bde21387)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:52Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.098716 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad989470-f62f-4c09-a038-600445a0bef9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1435109324c9f93586933d8583b63776eca644bd447bad750c4a692226bf2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e145cb8fa4966f57d40558e9b6b723ac0612635b0e55eec45a6e095da689c590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5tcvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:52Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.114389 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:52Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.132217 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.132289 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.132301 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.132324 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.132360 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:52Z","lastTransitionTime":"2025-11-23T14:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.133773 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:52Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.155348 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053cb938dee3d8b74728c76a43dced53bea21f7e4385be0c1d5343cb36bf6767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:52Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.169887 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:52Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.189431 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:52Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.210666 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:52Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.228225 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:52Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.235570 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.235765 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.235917 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.236041 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.236155 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:52Z","lastTransitionTime":"2025-11-23T14:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.246709 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:52Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.262804 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:52Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.282588 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c82c8ca1-7a30-47c8-a679-abe265aca15b-metrics-certs\") pod \"network-metrics-daemon-7qh4j\" (UID: \"c82c8ca1-7a30-47c8-a679-abe265aca15b\") " pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:46:52 crc kubenswrapper[4718]: E1123 14:46:52.282839 4718 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 14:46:52 crc 
kubenswrapper[4718]: E1123 14:46:52.282945 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c82c8ca1-7a30-47c8-a679-abe265aca15b-metrics-certs podName:c82c8ca1-7a30-47c8-a679-abe265aca15b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:08.282915303 +0000 UTC m=+79.522535187 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c82c8ca1-7a30-47c8-a679-abe265aca15b-metrics-certs") pod "network-metrics-daemon-7qh4j" (UID: "c82c8ca1-7a30-47c8-a679-abe265aca15b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.339227 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.339286 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.339305 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.339331 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.339348 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:52Z","lastTransitionTime":"2025-11-23T14:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.440494 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.440504 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:52 crc kubenswrapper[4718]: E1123 14:46:52.441036 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.440548 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:52 crc kubenswrapper[4718]: E1123 14:46:52.441310 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
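
Every "Failed to update status for pod" entry above fails the same way: the kubelet's status PATCH is intercepted by the pod.network-node-identity.openshift.io admission webhook on 127.0.0.1:9743, and the TLS handshake is rejected because the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-11-23T14:46:52Z. The failing test is the ordinary x509 validity-window comparison; below is a minimal Go sketch of the same check (the certificate path is a placeholder for illustration, not taken from the log):

    // certcheck.go - minimal sketch of the validity-window test behind
    // "x509: certificate has expired or is not yet valid" above.
    // The PEM path is a placeholder for illustration, not taken from the log.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/tmp/webhook-serving-cert.pem") // placeholder path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        now := time.Now().UTC()
        // The TLS handshake applies exactly this window check during verification.
        if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
            fmt.Printf("certificate has expired or is not yet valid: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
            return
        }
        fmt.Println("certificate is within its validity window")
    }

Until that certificate is rotated or the node clock is corrected, every status patch in this capture keeps failing identically, so the pod statuses quoted above reflect only the kubelet's local view, never what reaches the API server.
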
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:46:52 crc kubenswrapper[4718]: E1123 14:46:52.441574 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.442591 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.442632 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.442648 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.442672 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.442688 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:52Z","lastTransitionTime":"2025-11-23T14:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.545150 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.545207 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.545224 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.545247 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.545266 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:52Z","lastTransitionTime":"2025-11-23T14:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.648723 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.648808 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.648832 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.648867 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.648898 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:52Z","lastTransitionTime":"2025-11-23T14:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.752104 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.752180 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.752209 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.752240 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.752258 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:52Z","lastTransitionTime":"2025-11-23T14:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.855856 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.855972 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.855999 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.856072 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.856106 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:52Z","lastTransitionTime":"2025-11-23T14:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.958417 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.958518 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.958536 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.958564 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:52 crc kubenswrapper[4718]: I1123 14:46:52.958580 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:52Z","lastTransitionTime":"2025-11-23T14:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.062518 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.062580 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.062598 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.062623 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.062640 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:53Z","lastTransitionTime":"2025-11-23T14:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.165857 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.165908 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.165923 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.165944 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.165961 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:53Z","lastTransitionTime":"2025-11-23T14:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.268981 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.269024 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.269040 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.269061 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.269077 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:53Z","lastTransitionTime":"2025-11-23T14:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.371516 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.371559 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.371575 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.371595 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.371613 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:53Z","lastTransitionTime":"2025-11-23T14:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.440892 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:53 crc kubenswrapper[4718]: E1123 14:46:53.441043 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.474353 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.474387 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.474398 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.474412 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.474423 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:53Z","lastTransitionTime":"2025-11-23T14:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.577219 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.577269 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.577284 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.577305 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.577321 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:53Z","lastTransitionTime":"2025-11-23T14:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.680562 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.680607 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.680625 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.680647 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.680663 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:53Z","lastTransitionTime":"2025-11-23T14:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.783549 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.783596 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.783612 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.783637 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.783653 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:53Z","lastTransitionTime":"2025-11-23T14:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.888123 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.888188 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.888211 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.888240 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.888264 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:53Z","lastTransitionTime":"2025-11-23T14:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.991285 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.991328 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.991344 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.991361 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:53 crc kubenswrapper[4718]: I1123 14:46:53.991374 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:53Z","lastTransitionTime":"2025-11-23T14:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.094342 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.094390 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.094406 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.094427 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.094469 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:54Z","lastTransitionTime":"2025-11-23T14:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.197775 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.197839 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.197855 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.197883 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.197906 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:54Z","lastTransitionTime":"2025-11-23T14:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.300806 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.300843 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.300851 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.300864 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.300873 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:54Z","lastTransitionTime":"2025-11-23T14:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.404020 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.404102 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.404124 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.404156 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.404183 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:54Z","lastTransitionTime":"2025-11-23T14:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.440626 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.440643 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:54 crc kubenswrapper[4718]: E1123 14:46:54.440859 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.440892 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:54 crc kubenswrapper[4718]: E1123 14:46:54.441028 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:46:54 crc kubenswrapper[4718]: E1123 14:46:54.441259 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.507230 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.507286 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.507302 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.507326 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.507342 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:54Z","lastTransitionTime":"2025-11-23T14:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.610432 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.610521 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.610534 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.610555 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.610568 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:54Z","lastTransitionTime":"2025-11-23T14:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.610432 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.610521 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.610534 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.610555 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.610568 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:54Z","lastTransitionTime":"2025-11-23T14:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.714150 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.714213 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.714230 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.714251 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.714315 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:54Z","lastTransitionTime":"2025-11-23T14:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.816680 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.816747 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.816765 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.816791 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.816808 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:54Z","lastTransitionTime":"2025-11-23T14:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.919913 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.919960 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.919972 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.919990 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:54 crc kubenswrapper[4718]: I1123 14:46:54.920004 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:54Z","lastTransitionTime":"2025-11-23T14:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.023033 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.023080 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.023091 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.023109 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.023121 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:55Z","lastTransitionTime":"2025-11-23T14:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.124956 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.125000 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.125011 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.125027 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.125039 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:55Z","lastTransitionTime":"2025-11-23T14:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.228147 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.228230 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.228256 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.228289 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.228313 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:55Z","lastTransitionTime":"2025-11-23T14:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.331665 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.331714 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.331728 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.331745 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.331757 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:55Z","lastTransitionTime":"2025-11-23T14:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.434971 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.435049 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.435073 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.435105 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.435127 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:55Z","lastTransitionTime":"2025-11-23T14:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.440502 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 14:46:55 crc kubenswrapper[4718]: E1123 14:46:55.440709 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.537290 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.537332 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.537342 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.537357 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.537367 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:55Z","lastTransitionTime":"2025-11-23T14:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
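The setters.go:603 entries that repeat throughout this capture embed the node's Ready condition as a JSON object. A small sketch follows that unmarshals one of these payloads and pulls out the reason and message; the struct mirrors only the fields visible in the log (the real type is k8s.io/api/core/v1.NodeCondition, an assumption about provenance rather than something this log states).

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // nodeCondition covers just the keys present in the logged payloads.
    type nodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        // Payload copied verbatim from a setters.go:603 entry above.
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:55Z","lastTransitionTime":"2025-11-23T14:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
        var c nodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            panic(err)
        }
        fmt.Printf("%s=%s reason=%s\nmessage=%s\n", c.Type, c.Status, c.Reason, c.Message)
    }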
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.640194 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.640237 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.640278 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.640296 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.640305 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:55Z","lastTransitionTime":"2025-11-23T14:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.743163 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.743206 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.743216 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.743233 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.743244 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:55Z","lastTransitionTime":"2025-11-23T14:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.846577 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.846611 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.846621 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.846636 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.846647 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:55Z","lastTransitionTime":"2025-11-23T14:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.949911 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.949960 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.949969 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.949984 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:55 crc kubenswrapper[4718]: I1123 14:46:55.949993 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:55Z","lastTransitionTime":"2025-11-23T14:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.053930 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.054519 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.054547 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.054572 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.054589 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:56Z","lastTransitionTime":"2025-11-23T14:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.157794 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.157835 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.157845 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.157858 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.157868 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:56Z","lastTransitionTime":"2025-11-23T14:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.260570 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.260637 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.260655 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.260678 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.260693 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:56Z","lastTransitionTime":"2025-11-23T14:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.364108 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.364155 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.364166 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.364184 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.364194 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:56Z","lastTransitionTime":"2025-11-23T14:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.440631 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.440683 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.440804 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 14:46:56 crc kubenswrapper[4718]: E1123 14:46:56.441003 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 14:46:56 crc kubenswrapper[4718]: E1123 14:46:56.441335 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b"
Nov 23 14:46:56 crc kubenswrapper[4718]: E1123 14:46:56.441599 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.466131 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.466200 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.466214 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.466240 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.466256 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:56Z","lastTransitionTime":"2025-11-23T14:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
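The "Error syncing pod, skipping" entries above all trace to the single root cause named in every NodeNotReady condition: the CNI configuration directory is empty, so no pod that needs a pod-network sandbox can start. A hedged sketch of the kind of check involved follows; CRI-O's actual readiness logic lives in ocicni/libcni, and this simplified stand-in only mimics the "look for a usable config file" step.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Path taken from the repeated log message.
        dir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read CNI conf dir:", err)
            return
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // extensions libcni accepts
                fmt.Println("CNI config present:", e.Name())
                found = true
            }
        }
        if !found {
            fmt.Println("no CNI configuration file in", dir, "- network plugin not ready")
        }
    }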
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.569154 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.569266 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.569281 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.569309 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.569325 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:56Z","lastTransitionTime":"2025-11-23T14:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.672949 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.673011 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.673022 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.673040 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.673055 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:56Z","lastTransitionTime":"2025-11-23T14:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.776007 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.776048 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.776060 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.776076 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.776086 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:56Z","lastTransitionTime":"2025-11-23T14:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.880008 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.880063 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.880074 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.880093 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.880105 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:56Z","lastTransitionTime":"2025-11-23T14:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.982984 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.983084 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.983102 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.983135 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:56 crc kubenswrapper[4718]: I1123 14:46:56.983158 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:56Z","lastTransitionTime":"2025-11-23T14:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.086811 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.086867 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.086880 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.086900 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.086913 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:57Z","lastTransitionTime":"2025-11-23T14:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.190033 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.190112 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.190131 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.190162 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.190187 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:57Z","lastTransitionTime":"2025-11-23T14:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.293545 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.293767 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.293782 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.293888 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.293937 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:57Z","lastTransitionTime":"2025-11-23T14:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.397516 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.397549 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.397559 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.397575 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.397605 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:57Z","lastTransitionTime":"2025-11-23T14:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.440588 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 14:46:57 crc kubenswrapper[4718]: E1123 14:46:57.440689 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.441906 4718 scope.go:117] "RemoveContainer" containerID="e6a3af009a3bd9c863689f80f7245af64f81afee2717c35c9d8b41d381d0da62"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.501556 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.501858 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.501871 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.501911 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.501934 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:57Z","lastTransitionTime":"2025-11-23T14:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.606755 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.606806 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.606824 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.606845 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.606859 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:57Z","lastTransitionTime":"2025-11-23T14:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.709967 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.710027 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.710044 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.710067 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.710085 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:57Z","lastTransitionTime":"2025-11-23T14:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.812951 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.812995 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.813008 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.813029 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.813043 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:57Z","lastTransitionTime":"2025-11-23T14:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.916175 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.916219 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.916236 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.916257 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:57 crc kubenswrapper[4718]: I1123 14:46:57.916271 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:57Z","lastTransitionTime":"2025-11-23T14:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.000872 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zjskv_aa4a9264-1cb9-41bc-a30a-4e09bde21387/ovnkube-controller/1.log"
Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.004147 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerStarted","Data":"ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff"}
Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.005708 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv"
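The PLEG ContainerStarted event for ovnkube-node-zjskv just above is the first sign of recovery in this capture: ovn-kubernetes is the network provider the repeated error message is asking about, and once its controller writes a CNI config into /etc/kubernetes/cni/net.d/ the NetworkReady condition should flip. A speculative polling sketch of that ordering follows; it is illustrative only, not kubelet or ovn-kubernetes code, and the 30-second budget is an arbitrary choice.

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d"
        for i := 0; i < 30; i++ {
            // As soon as any config file appears, the runtime's readiness
            // check (and hence the node Ready condition) can succeed.
            if entries, err := os.ReadDir(dir); err == nil && len(entries) > 0 {
                fmt.Println("CNI config appeared:", entries[0].Name())
                return
            }
            time.Sleep(time.Second)
        }
        fmt.Println("still no CNI config after 30s")
    }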
Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.019029 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.019097 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.019108 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.019131 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.019161 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:58Z","lastTransitionTime":"2025-11-23T14:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.020968 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:58Z is after 2025-08-24T17:21:41Z"
Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.039341 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:58Z is after 2025-08-24T17:21:41Z"
Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.050134 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:58Z is after 2025-08-24T17:21:41Z"
Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.066760 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:58Z is after 2025-08-24T17:21:41Z"
Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.083649 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:58Z is after 2025-08-24T17:21:41Z"
Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.097387 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:58Z is after 2025-08-24T17:21:41Z"
Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.115885 4718 status_manager.go:875] "Failed to update status for pod"
pod="openshift-multus/multus-qb66k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.121664 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.121708 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.121718 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.121733 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.121744 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:58Z","lastTransitionTime":"2025-11-23T14:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.129702 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7qh4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82c8ca1-7a30-47c8-a679-abe265aca15b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7qh4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.143211 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bef58635ac91dee3ae059f1e1a13f5fd717c00e7d13ae580a6a36c9ed22685f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.155512 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5081b466-d08b-4432-92f6-7f7f1c9fb607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e828a18d60c1a77b19b2a707494209d60b21e5a11007c4029bcfffcc632866b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04b9e9ababbd821f8c8d2d640867c41d6918182b6869ee198fee85a37934e530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac138ef465b8fb107aa6828f2d531a40cae862514aa9629e5b7bf88c2348c3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.174723 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0e8f0cd14780374a230d9d6d8617452b950092
1d7737d94829a08481a0ebff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6a3af009a3bd9c863689f80f7245af64f81afee2717c35c9d8b41d381d0da62\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T14:46:43Z\\\",\\\"message\\\":\\\"8519615025667110816) with []\\\\nI1123 14:46:43.133839 6272 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1123 14:46:43.133901 6272 factory.go:1336] Added *v1.Node event handler 7\\\\nI1123 14:46:43.133942 6272 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1123 14:46:43.134325 6272 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1123 14:46:43.134429 6272 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1123 14:46:43.134503 6272 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1123 14:46:43.134553 6272 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1123 14:46:43.134597 6272 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 14:46:43.134614 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 14:46:43.134636 6272 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 14:46:43.134661 6272 factory.go:656] Stopping watch factory\\\\nI1123 14:46:43.134676 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 14:46:43.134687 6272 ovnkube.go:599] Stopped ovnkube\\\\nI1123 14:46:43.134724 6272 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1123 14:46:43.134824 6272 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.193498 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad989470-f62f-4c09-a038-600445a0bef9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1435109324c9f93586933d8583b63776eca644bd447bad750c4a692226bf2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e145cb8fa4966f57d40558e9b6b723ac0612635b0e55eec45a6e095da689c590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5tcvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 
14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.212014 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.225199 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.225249 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.225260 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.225280 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.225292 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:58Z","lastTransitionTime":"2025-11-23T14:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.233849 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.249103 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053cb938dee3d8b74728c76a43dced53bea21f7e4385be0c1d5343cb36bf6767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.260246 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.272327 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.328511 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.328556 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.328570 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.328587 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.328597 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:58Z","lastTransitionTime":"2025-11-23T14:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.432110 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.432151 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.432163 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.432178 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.432187 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:58Z","lastTransitionTime":"2025-11-23T14:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.440614 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.440627 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.440734 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:46:58 crc kubenswrapper[4718]: E1123 14:46:58.440775 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:46:58 crc kubenswrapper[4718]: E1123 14:46:58.440887 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:46:58 crc kubenswrapper[4718]: E1123 14:46:58.441057 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.533688 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.533747 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.533757 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.533770 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.533781 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:58Z","lastTransitionTime":"2025-11-23T14:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.635999 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.636029 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.636040 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.636056 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.636067 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:58Z","lastTransitionTime":"2025-11-23T14:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.738905 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.738947 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.738956 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.738970 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.738979 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:58Z","lastTransitionTime":"2025-11-23T14:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.841535 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.841595 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.841617 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.841645 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.841668 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:58Z","lastTransitionTime":"2025-11-23T14:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.944066 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.944143 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.944170 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.944202 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:58 crc kubenswrapper[4718]: I1123 14:46:58.944225 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:58Z","lastTransitionTime":"2025-11-23T14:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.009298 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zjskv_aa4a9264-1cb9-41bc-a30a-4e09bde21387/ovnkube-controller/2.log" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.010339 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zjskv_aa4a9264-1cb9-41bc-a30a-4e09bde21387/ovnkube-controller/1.log" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.013569 4718 generic.go:334] "Generic (PLEG): container finished" podID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerID="ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff" exitCode=1 Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.013633 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerDied","Data":"ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff"} Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.013693 4718 scope.go:117] "RemoveContainer" containerID="e6a3af009a3bd9c863689f80f7245af64f81afee2717c35c9d8b41d381d0da62" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.014903 4718 scope.go:117] "RemoveContainer" containerID="ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff" Nov 23 14:46:59 crc kubenswrapper[4718]: E1123 14:46:59.015218 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zjskv_openshift-ovn-kubernetes(aa4a9264-1cb9-41bc-a30a-4e09bde21387)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.038293 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.047249 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.047287 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.047298 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.047316 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.047328 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:59Z","lastTransitionTime":"2025-11-23T14:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.063670 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6a3af009a3bd9c863689f80f7245af64f81afee2717c35c9d8b41d381d0da62\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T14:46:43Z\\\",\\\"message\\\":\\\"8519615025667110816) with []\\\\nI1123 14:46:43.133839 6272 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1123 14:46:43.133901 6272 factory.go:1336] Added *v1.Node event handler 7\\\\nI1123 14:46:43.133942 6272 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1123 14:46:43.134325 6272 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1123 14:46:43.134429 6272 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1123 14:46:43.134503 6272 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1123 14:46:43.134553 6272 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1123 14:46:43.134597 6272 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 14:46:43.134614 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 14:46:43.134636 6272 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 14:46:43.134661 6272 factory.go:656] Stopping watch factory\\\\nI1123 14:46:43.134676 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 14:46:43.134687 6272 ovnkube.go:599] Stopped ovnkube\\\\nI1123 14:46:43.134724 6272 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1123 14:46:43.134824 6272 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T14:46:58Z\\\",\\\"message\\\":\\\"etwork-check-source-55646444c4-trplf\\\\nI1123 14:46:58.464887 6533 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1123 14:46:58.464894 6533 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1123 14:46:58.464898 6533 lb_config.go:1031] Cluster endpoints for openshift-authentication/oauth-openshift for network=default are: map[]\\\\nF1123 14:46:58.464911 6533 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:58Z is after 2025-08-24T17:21:41Z]\\\\nI1123 14:46:58.464917 6533 
base_network_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.077196 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad989470-f62f-4c09-a038-600445a0bef9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1435109324c9f93586933d8583b63776eca644bd447bad750c4a692226bf2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e145cb8fa4966f57d40558e9b6b723ac0612635b0e55eec45a6e095da689c590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5tcvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.089187 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.099894 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.111879 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053cb938dee3d8b74728c76a43dced53bea21f7e4385be0c1d5343cb36bf6767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.120589 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.130789 4718 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e
16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.140818 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.148944 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.148970 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.148981 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.148998 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.149009 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:59Z","lastTransitionTime":"2025-11-23T14:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.151957 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.162408 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.172060 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.184561 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bef58635ac91dee3ae059f1e1a13f5fd717c00e7d13ae580a6a36c9ed22685f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.194641 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5081b466-d08b-4432-92f6-7f7f1c9fb607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e828a18d60c1a77b19b2a707494209d60b21e5a11007c4029bcfffcc632866b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04b9e9ababbd821f8c8d2d640867c41d6918182b6869ee198fee85a37934e530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac138ef465b8fb107aa6828f2d531a40cae862514aa9629e5b7bf88c2348c3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.204332 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.222045 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\
\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.230366 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7qh4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82c8ca1-7a30-47c8-a679-abe265aca15b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7qh4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.253469 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.253525 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.253541 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.253564 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.253579 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:59Z","lastTransitionTime":"2025-11-23T14:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.356299 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.356337 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.356348 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.356363 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.356376 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:59Z","lastTransitionTime":"2025-11-23T14:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.439906 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:46:59 crc kubenswrapper[4718]: E1123 14:46:59.440020 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.459551 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.459594 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.459606 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.459624 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.459636 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:59Z","lastTransitionTime":"2025-11-23T14:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.561820 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.561854 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.561862 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.561875 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.561883 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:59Z","lastTransitionTime":"2025-11-23T14:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.664800 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.664849 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.664860 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.664879 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.664891 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:59Z","lastTransitionTime":"2025-11-23T14:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.767727 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.767790 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.767807 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.767831 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.767848 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:59Z","lastTransitionTime":"2025-11-23T14:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.872113 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.872172 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.872187 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.872208 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.872225 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:59Z","lastTransitionTime":"2025-11-23T14:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.975430 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.975521 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.975539 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.975562 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:46:59 crc kubenswrapper[4718]: I1123 14:46:59.975581 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:46:59Z","lastTransitionTime":"2025-11-23T14:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.020618 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zjskv_aa4a9264-1cb9-41bc-a30a-4e09bde21387/ovnkube-controller/2.log" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.025672 4718 scope.go:117] "RemoveContainer" containerID="ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff" Nov 23 14:47:00 crc kubenswrapper[4718]: E1123 14:47:00.025911 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zjskv_openshift-ovn-kubernetes(aa4a9264-1cb9-41bc-a30a-4e09bde21387)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.043866 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\
"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.058036 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7qh4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82c8ca1-7a30-47c8-a679-abe265aca15b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7qh4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.078739 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.078770 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.078779 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.078795 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.078805 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:00Z","lastTransitionTime":"2025-11-23T14:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.079963 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bef58635ac91dee3ae059f1e1a13f5fd717c00e7d13ae580a6a36c9ed22685f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.097790 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5081b466-d08b-4432-92f6-7f7f1c9fb607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e828a18d60c1a77b19b2a707494209d60b21e5a11007c4029bcfffcc632866b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04b9e9ababbd821f8c8d2d640867c41d6918182b6869ee198fee85a37934e530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac138ef465b8fb107aa6828f2d531a40cae862514aa9629e5b7bf88c2348c3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.150861 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.165660 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad989470-f62f-4c09-a038-600445a0bef9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1435109324c9f93586933d8583b63776eca644bd447bad750c4a692226bf2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e145cb8fa4966f57d40558e9b6b723ac0612635b0e55eec45a6e095da689c590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5tcvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.179935 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.181544 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.181570 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.181578 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.181591 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.181599 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:00Z","lastTransitionTime":"2025-11-23T14:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.200776 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T14:46:58Z\\\",\\\"message\\\":\\\"etwork-check-source-55646444c4-trplf\\\\nI1123 14:46:58.464887 6533 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1123 14:46:58.464894 6533 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1123 14:46:58.464898 6533 lb_config.go:1031] Cluster endpoints for openshift-authentication/oauth-openshift for network=default are: map[]\\\\nF1123 14:46:58.464911 6533 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:58Z is after 2025-08-24T17:21:41Z]\\\\nI1123 14:46:58.464917 6533 base_network_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zjskv_openshift-ovn-kubernetes(aa4a9264-1cb9-41bc-a30a-4e09bde21387)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.225030 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053cb938dee3d8b74728c76a43dced53bea21f7e4385be0c1d5343cb36bf6767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-23T14:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.239842 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.257688 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.271825 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.284014 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.284038 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.284045 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.284060 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.284068 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:00Z","lastTransitionTime":"2025-11-23T14:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.285635 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.304775 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.334212 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.368153 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.381674 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.386319 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.386339 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.386347 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.386358 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.386366 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:00Z","lastTransitionTime":"2025-11-23T14:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.444898 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:47:00 crc kubenswrapper[4718]: E1123 14:47:00.445053 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.445537 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:47:00 crc kubenswrapper[4718]: E1123 14:47:00.445630 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.445692 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:47:00 crc kubenswrapper[4718]: E1123 14:47:00.445767 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.461741 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.483220 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.488954 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.489013 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.489029 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.489052 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.489070 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:00Z","lastTransitionTime":"2025-11-23T14:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.497075 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.516122 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053cb938dee3d8b74728c76a43dced53bea21f7e4385be0c1d5343cb36bf6767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.530815 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.549722 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.565993 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.580418 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.591937 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.591984 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.592001 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.592024 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.592043 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:00Z","lastTransitionTime":"2025-11-23T14:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.593082 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.603706 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7qh4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82c8ca1-7a30-47c8-a679-abe265aca15b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7qh4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.622004 4718 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bef58635ac91dee3ae059f1e1a13f5f
d717c00e7d13ae580a6a36c9ed22685f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.635431 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5081b466-d08b-4432-92f6-7f7f1c9fb607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e828a18d60c1a77b19b2a707494209d60b21e5a11007c4029bcfffcc632866b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04b9e9ababbd821f8c8d2d640867c41d6918182b6869ee198fee85a37934e530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac138ef465b8fb107aa6828f2d531a40cae862514aa9629e5b7bf88c2348c3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.652311 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.670414 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\
\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.685807 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.697470 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.697531 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.697553 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.697582 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.697604 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:00Z","lastTransitionTime":"2025-11-23T14:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.706636 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T14:46:58Z\\\",\\\"message\\\":\\\"etwork-check-source-55646444c4-trplf\\\\nI1123 14:46:58.464887 6533 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1123 14:46:58.464894 6533 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1123 14:46:58.464898 6533 lb_config.go:1031] Cluster endpoints for openshift-authentication/oauth-openshift for network=default are: map[]\\\\nF1123 14:46:58.464911 6533 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:58Z is after 2025-08-24T17:21:41Z]\\\\nI1123 14:46:58.464917 6533 base_network_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zjskv_openshift-ovn-kubernetes(aa4a9264-1cb9-41bc-a30a-4e09bde21387)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.722561 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad989470-f62f-4c09-a038-600445a0bef9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1435109324c9f93586933d8583b63776eca644bd447bad750c4a692226bf2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e145cb8fa4966f57d40558e9b6b723ac0612635b0e55eec45a6e095da689c590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5tcvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.800603 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.800666 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.800689 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.800717 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.800741 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:00Z","lastTransitionTime":"2025-11-23T14:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.903873 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.903944 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.903962 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.903984 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:47:00 crc kubenswrapper[4718]: I1123 14:47:00.904001 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:00Z","lastTransitionTime":"2025-11-23T14:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.006551 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.006613 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.006631 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.006654 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.006672 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:01Z","lastTransitionTime":"2025-11-23T14:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.109385 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.109422 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.109431 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.109458 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.109467 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:01Z","lastTransitionTime":"2025-11-23T14:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.212155 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.212216 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.212232 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.212257 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.212273 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:01Z","lastTransitionTime":"2025-11-23T14:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.314959 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.315009 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.315021 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.315039 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.315050 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:01Z","lastTransitionTime":"2025-11-23T14:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.417981 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.418081 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.418101 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.418125 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.418144 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:01Z","lastTransitionTime":"2025-11-23T14:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.441036 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 14:47:01 crc kubenswrapper[4718]: E1123 14:47:01.441237 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.521212 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.521263 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.521280 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.521303 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.521319 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:01Z","lastTransitionTime":"2025-11-23T14:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.624238 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.624303 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.624320 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.624349 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.624374 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:01Z","lastTransitionTime":"2025-11-23T14:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.726977 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.727035 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.727052 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.727077 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.727094 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:01Z","lastTransitionTime":"2025-11-23T14:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.830039 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.830100 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.830118 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.830139 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.830155 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:01Z","lastTransitionTime":"2025-11-23T14:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.932569 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.932628 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.932645 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.932667 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:47:01 crc kubenswrapper[4718]: I1123 14:47:01.932684 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:01Z","lastTransitionTime":"2025-11-23T14:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.035984 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.036054 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.036072 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.036101 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.036118 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:02Z","lastTransitionTime":"2025-11-23T14:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.101609 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.101663 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.101675 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.101697 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.101710 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:02Z","lastTransitionTime":"2025-11-23T14:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
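
The loop above repeats roughly every 100 ms: the kubelet re-records the same four node conditions and stays NotReady because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. The sketch below (hypothetical file name cnicheck.go; not the kubelet's actual probe, which goes through the CRI and libcni) just makes the on-disk check concrete: the node cannot leave NotReady until a *.conf, *.conflist, or *.json file appears in that directory.

// cnicheck.go — minimal sketch, not kubelet code: report whether any CNI
// config file exists in the directory named by the log message above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory from the log message
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Fprintf(os.Stderr, "cannot read %s: %v\n", confDir, err)
		os.Exit(1)
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni considers
			fmt.Println("CNI config present:", filepath.Join(confDir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file found - node will stay NotReady")
	}
}
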
Nov 23 14:47:02 crc kubenswrapper[4718]: E1123 14:47:02.121682 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:02Z is after 2025-08-24T17:21:41Z"
Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.131480 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.131541 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
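
The retry above can be confirmed from outside the kubelet: the node-identity webhook endpoint named in the error (https://127.0.0.1:9743/node) is presenting a serving certificate that expired on 2025-08-24T17:21:41Z. A minimal diagnostic sketch (hypothetical file name certpeek.go; InsecureSkipVerify is deliberate so the handshake completes even though verification would fail) that fetches and prints the certificate's validity window:

// certpeek.go — sketch: connect to the webhook endpoint from the log and
// print the serving certificate's validity window.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // endpoint from the failed webhook POST above
	conn, err := tls.Dial("tcp", addr, &tls.Config{
		InsecureSkipVerify: true, // diagnostic only: accept the expired cert
	})
	if err != nil {
		log.Fatalf("handshake with %s failed: %v", addr, err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0] // leaf certificate
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore.Format(time.RFC3339))
	fmt.Println("notAfter: ", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate is expired - consistent with the kubelet error above")
	}
}
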
event="NodeHasNoDiskPressure" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.131562 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.131589 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.131610 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:02Z","lastTransitionTime":"2025-11-23T14:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:02 crc kubenswrapper[4718]: E1123 14:47:02.150169 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:02Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.155356 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.155401 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
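
The err= payload is hard to read because it is quoted twice: once when the status patch was embedded in the error string, and again when the logger rendered err="...". A sketch of the two-pass unquoting (hypothetical helper, not a kubelet tool; the sample string is a shortened stand-in for the real payload):

// unquotepatch.go — sketch: recover the JSON patch embedded in a
// "failed to patch status" log line like the ones above.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"strconv"
	"strings"
)

func main() {
	// shortened stand-in for the err="..." value seen in the log
	raw := `failed to patch status \"{\\\"status\\\":{\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\"}}}\" for node \"crc\"`

	// pass 1: undo the quoting applied when the line was logged as err="..."
	msg, err := strconv.Unquote(`"` + raw + `"`)
	if err != nil {
		log.Fatal(err)
	}

	// pass 2: the patch is itself a quoted string inside the message
	start := strings.Index(msg, `"`)
	end := strings.LastIndex(msg, `" for node`)
	patch, err := strconv.Unquote(msg[start : end+1])
	if err != nil {
		log.Fatal(err)
	}

	// pretty-print the recovered JSON patch
	var buf bytes.Buffer
	if err := json.Indent(&buf, []byte(patch), "", "  "); err != nil {
		log.Fatal(err)
	}
	fmt.Println(buf.String())
}
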
event="NodeHasNoDiskPressure" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.155413 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.155431 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.155813 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:02Z","lastTransitionTime":"2025-11-23T14:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:02 crc kubenswrapper[4718]: E1123 14:47:02.170708 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:02Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.175010 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.175063 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
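
Each retry fails identically because it hits the same expired certificate; nothing in this loop can succeed until the certificate is rotated. Using only the two timestamps the TLS error itself reports, a throwaway sketch of how stale the certificate was at the time of the failed call:

// staleness.go — sketch: both timestamps are copied from the log line above.
package main

import (
	"fmt"
	"time"
)

func main() {
	current, err := time.Parse(time.RFC3339, "2025-11-23T14:47:02Z") // "current time" in the error
	if err != nil {
		panic(err)
	}
	notAfter, err := time.Parse(time.RFC3339, "2025-08-24T17:21:41Z") // certificate NotAfter in the error
	if err != nil {
		panic(err)
	}
	expiredFor := current.Sub(notAfter)
	fmt.Printf("certificate had been expired for %s (~%.0f days)\n",
		expiredFor.Round(time.Minute), expiredFor.Hours()/24)
}
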
event="NodeHasNoDiskPressure" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.175077 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.175101 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.175118 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:02Z","lastTransitionTime":"2025-11-23T14:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:02 crc kubenswrapper[4718]: E1123 14:47:02.192456 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:02Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.197003 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.197060 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.197078 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.197101 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.197117 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:02Z","lastTransitionTime":"2025-11-23T14:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:02 crc kubenswrapper[4718]: E1123 14:47:02.215342 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:02Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:02 crc kubenswrapper[4718]: E1123 14:47:02.215634 4718 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.217610 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.217659 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.217672 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.217708 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.217723 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:02Z","lastTransitionTime":"2025-11-23T14:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.319736 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.319796 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.319809 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.319831 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.319847 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:02Z","lastTransitionTime":"2025-11-23T14:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.422265 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.422310 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.422320 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.422334 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.422343 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:02Z","lastTransitionTime":"2025-11-23T14:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.440860 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.440891 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.440978 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:47:02 crc kubenswrapper[4718]: E1123 14:47:02.441112 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:47:02 crc kubenswrapper[4718]: E1123 14:47:02.441245 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:47:02 crc kubenswrapper[4718]: E1123 14:47:02.441490 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.524009 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.524067 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.524083 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.524111 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.524131 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:02Z","lastTransitionTime":"2025-11-23T14:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.626894 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.626931 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.626941 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.626960 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.626972 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:02Z","lastTransitionTime":"2025-11-23T14:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.728848 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.728893 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.728905 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.728924 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.728936 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:02Z","lastTransitionTime":"2025-11-23T14:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.831298 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.831355 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.831372 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.831396 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.831414 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:02Z","lastTransitionTime":"2025-11-23T14:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.933384 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.933422 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.933466 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.933485 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:02 crc kubenswrapper[4718]: I1123 14:47:02.933500 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:02Z","lastTransitionTime":"2025-11-23T14:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.035629 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.035675 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.035692 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.035714 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.035731 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:03Z","lastTransitionTime":"2025-11-23T14:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.139015 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.139082 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.139099 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.139123 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.139140 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:03Z","lastTransitionTime":"2025-11-23T14:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.241585 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.241654 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.241677 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.241710 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.241734 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:03Z","lastTransitionTime":"2025-11-23T14:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.344631 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.344710 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.344735 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.344766 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.344793 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:03Z","lastTransitionTime":"2025-11-23T14:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.440843 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:47:03 crc kubenswrapper[4718]: E1123 14:47:03.441059 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.447482 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.447537 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.447598 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.447633 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.447658 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:03Z","lastTransitionTime":"2025-11-23T14:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.550316 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.550379 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.550396 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.550421 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.550463 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:03Z","lastTransitionTime":"2025-11-23T14:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.653711 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.653754 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.653771 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.653798 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.653815 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:03Z","lastTransitionTime":"2025-11-23T14:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.756865 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.756927 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.756950 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.756982 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.757002 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:03Z","lastTransitionTime":"2025-11-23T14:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.859894 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.859959 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.859982 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.860013 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.860043 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:03Z","lastTransitionTime":"2025-11-23T14:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.962409 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.962509 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.962528 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.962552 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:03 crc kubenswrapper[4718]: I1123 14:47:03.962569 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:03Z","lastTransitionTime":"2025-11-23T14:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.065400 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.065463 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.065475 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.065491 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.065501 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:04Z","lastTransitionTime":"2025-11-23T14:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.168937 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.169004 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.169025 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.169051 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.169070 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:04Z","lastTransitionTime":"2025-11-23T14:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.271377 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.271468 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.271482 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.271500 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.271513 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:04Z","lastTransitionTime":"2025-11-23T14:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.375416 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.375513 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.375541 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.375573 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.375597 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:04Z","lastTransitionTime":"2025-11-23T14:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.440817 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.440927 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:47:04 crc kubenswrapper[4718]: E1123 14:47:04.441011 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.441090 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:47:04 crc kubenswrapper[4718]: E1123 14:47:04.441316 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:47:04 crc kubenswrapper[4718]: E1123 14:47:04.441581 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.477953 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.478018 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.478040 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.478063 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.478081 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:04Z","lastTransitionTime":"2025-11-23T14:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.582252 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.582328 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.582346 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.582379 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.582400 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:04Z","lastTransitionTime":"2025-11-23T14:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.684988 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.685070 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.685094 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.685171 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.685197 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:04Z","lastTransitionTime":"2025-11-23T14:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.788077 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.788132 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.788148 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.788170 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.788188 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:04Z","lastTransitionTime":"2025-11-23T14:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.890696 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.890793 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.890812 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.890836 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.890853 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:04Z","lastTransitionTime":"2025-11-23T14:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.994362 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.994567 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.994586 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.995025 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:04 crc kubenswrapper[4718]: I1123 14:47:04.995079 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:04Z","lastTransitionTime":"2025-11-23T14:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.099111 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.099169 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.099178 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.099195 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.099204 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:05Z","lastTransitionTime":"2025-11-23T14:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.201921 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.201966 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.201983 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.202006 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.202023 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:05Z","lastTransitionTime":"2025-11-23T14:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.304167 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.304209 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.304225 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.304247 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.304263 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:05Z","lastTransitionTime":"2025-11-23T14:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.408198 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.408262 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.408278 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.408299 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.408317 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:05Z","lastTransitionTime":"2025-11-23T14:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.439962 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:47:05 crc kubenswrapper[4718]: E1123 14:47:05.440186 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.511342 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.511410 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.511466 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.511501 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.511523 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:05Z","lastTransitionTime":"2025-11-23T14:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.614433 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.614529 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.614548 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.614573 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.614589 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:05Z","lastTransitionTime":"2025-11-23T14:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.717910 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.717953 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.717966 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.717985 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.717998 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:05Z","lastTransitionTime":"2025-11-23T14:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.822528 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.822606 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.822629 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.822660 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.822686 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:05Z","lastTransitionTime":"2025-11-23T14:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.925233 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.925265 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.925274 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.925287 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:05 crc kubenswrapper[4718]: I1123 14:47:05.925296 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:05Z","lastTransitionTime":"2025-11-23T14:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.027399 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.027484 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.027494 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.027510 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.027521 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:06Z","lastTransitionTime":"2025-11-23T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.129966 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.130021 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.130042 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.130065 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.130083 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:06Z","lastTransitionTime":"2025-11-23T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.233393 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.233485 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.233694 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.233722 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.233744 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:06Z","lastTransitionTime":"2025-11-23T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.336497 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.336611 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.336637 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.336665 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.336688 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:06Z","lastTransitionTime":"2025-11-23T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.439761 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.439842 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.439853 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.439873 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.439886 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:06Z","lastTransitionTime":"2025-11-23T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.440321 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:47:06 crc kubenswrapper[4718]: E1123 14:47:06.440617 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.440714 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:47:06 crc kubenswrapper[4718]: E1123 14:47:06.440909 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.440726 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:47:06 crc kubenswrapper[4718]: E1123 14:47:06.441103 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.542817 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.542878 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.542889 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.542907 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.542920 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:06Z","lastTransitionTime":"2025-11-23T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.646412 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.646534 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.646554 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.646583 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.646604 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:06Z","lastTransitionTime":"2025-11-23T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.749710 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.749773 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.749789 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.749813 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.749832 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:06Z","lastTransitionTime":"2025-11-23T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.853409 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.853544 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.853570 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.853601 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.853624 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:06Z","lastTransitionTime":"2025-11-23T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.961300 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.961362 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.961381 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.961416 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:06 crc kubenswrapper[4718]: I1123 14:47:06.961434 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:06Z","lastTransitionTime":"2025-11-23T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.064565 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.064625 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.064641 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.064664 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.064682 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:07Z","lastTransitionTime":"2025-11-23T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.166852 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.166919 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.166942 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.166971 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.166997 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:07Z","lastTransitionTime":"2025-11-23T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.270190 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.270232 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.270245 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.270269 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.270283 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:07Z","lastTransitionTime":"2025-11-23T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.373570 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.373625 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.373642 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.373663 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.373678 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:07Z","lastTransitionTime":"2025-11-23T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.440106 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:47:07 crc kubenswrapper[4718]: E1123 14:47:07.440299 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.476603 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.476659 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.476677 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.476699 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.476716 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:07Z","lastTransitionTime":"2025-11-23T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.581657 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.581752 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.581829 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.581859 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.581878 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:07Z","lastTransitionTime":"2025-11-23T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.684473 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.684533 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.684550 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.684574 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.684595 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:07Z","lastTransitionTime":"2025-11-23T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.787493 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.787566 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.787590 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.787621 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.787643 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:07Z","lastTransitionTime":"2025-11-23T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.891090 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.891173 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.891198 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.891229 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.891280 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:07Z","lastTransitionTime":"2025-11-23T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.993911 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.993975 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.993993 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.994017 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:07 crc kubenswrapper[4718]: I1123 14:47:07.994035 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:07Z","lastTransitionTime":"2025-11-23T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.096754 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.096832 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.096856 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.096885 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.096907 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:08Z","lastTransitionTime":"2025-11-23T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.199068 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.199139 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.199159 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.199185 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.199202 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:08Z","lastTransitionTime":"2025-11-23T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.302646 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.302715 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.302736 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.302765 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.302786 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:08Z","lastTransitionTime":"2025-11-23T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.352188 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c82c8ca1-7a30-47c8-a679-abe265aca15b-metrics-certs\") pod \"network-metrics-daemon-7qh4j\" (UID: \"c82c8ca1-7a30-47c8-a679-abe265aca15b\") " pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:47:08 crc kubenswrapper[4718]: E1123 14:47:08.352430 4718 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 14:47:08 crc kubenswrapper[4718]: E1123 14:47:08.352674 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c82c8ca1-7a30-47c8-a679-abe265aca15b-metrics-certs podName:c82c8ca1-7a30-47c8-a679-abe265aca15b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:40.352640714 +0000 UTC m=+111.592260588 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c82c8ca1-7a30-47c8-a679-abe265aca15b-metrics-certs") pod "network-metrics-daemon-7qh4j" (UID: "c82c8ca1-7a30-47c8-a679-abe265aca15b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.405901 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.405953 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.405971 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.405995 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.406012 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:08Z","lastTransitionTime":"2025-11-23T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.440720 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:47:08 crc kubenswrapper[4718]: E1123 14:47:08.440909 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.440945 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:47:08 crc kubenswrapper[4718]: E1123 14:47:08.441112 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.440751 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:47:08 crc kubenswrapper[4718]: E1123 14:47:08.441243 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.509319 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.509360 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.509368 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.509383 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.509393 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:08Z","lastTransitionTime":"2025-11-23T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.612547 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.612640 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.612664 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.612697 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.612721 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:08Z","lastTransitionTime":"2025-11-23T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.716573 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.716709 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.716727 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.716752 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.716768 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:08Z","lastTransitionTime":"2025-11-23T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.824106 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.824265 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.824291 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.824321 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.824387 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:08Z","lastTransitionTime":"2025-11-23T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.927809 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.927916 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.927936 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.927963 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:08 crc kubenswrapper[4718]: I1123 14:47:08.927980 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:08Z","lastTransitionTime":"2025-11-23T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.031036 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.031078 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.031091 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.031107 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.031174 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:09Z","lastTransitionTime":"2025-11-23T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.133770 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.133840 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.133857 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.133881 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.133905 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:09Z","lastTransitionTime":"2025-11-23T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.237261 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.237370 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.237388 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.237413 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.237431 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:09Z","lastTransitionTime":"2025-11-23T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.339954 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.340029 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.340051 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.340081 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.340105 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:09Z","lastTransitionTime":"2025-11-23T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.440224 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:47:09 crc kubenswrapper[4718]: E1123 14:47:09.440468 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.442609 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.442668 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.442686 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.442707 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.442723 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:09Z","lastTransitionTime":"2025-11-23T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.545736 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.545814 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.545837 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.545862 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.545879 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:09Z","lastTransitionTime":"2025-11-23T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.648429 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.648496 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.648508 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.648526 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.648537 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:09Z","lastTransitionTime":"2025-11-23T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.751243 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.751282 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.751294 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.751310 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.751321 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:09Z","lastTransitionTime":"2025-11-23T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.854801 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.854873 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.854895 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.854940 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.854962 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:09Z","lastTransitionTime":"2025-11-23T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.958685 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.958740 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.958756 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.958781 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:09 crc kubenswrapper[4718]: I1123 14:47:09.958798 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:09Z","lastTransitionTime":"2025-11-23T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.062241 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.062304 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.062322 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.062346 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.062363 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:10Z","lastTransitionTime":"2025-11-23T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.164811 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.164893 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.164911 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.164933 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.164950 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:10Z","lastTransitionTime":"2025-11-23T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.268432 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.268510 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.268528 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.268552 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.268569 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:10Z","lastTransitionTime":"2025-11-23T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.371133 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.371179 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.371195 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.371217 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.371234 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:10Z","lastTransitionTime":"2025-11-23T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.441018 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:47:10 crc kubenswrapper[4718]: E1123 14:47:10.441208 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.441566 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:47:10 crc kubenswrapper[4718]: E1123 14:47:10.441669 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.442607 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:47:10 crc kubenswrapper[4718]: E1123 14:47:10.443009 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.460363 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.474529 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.474578 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.474597 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.474622 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.474640 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:10Z","lastTransitionTime":"2025-11-23T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.476311 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:10Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.495855 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92e
daf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:10Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.514911 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:10Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.533823 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:10Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.552961 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:10Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.570069 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7qh4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82c8ca1-7a30-47c8-a679-abe265aca15b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7qh4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:10Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:10 crc 
Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.577401 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.577942 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.578781 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.579062 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.579275 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:10Z","lastTransitionTime":"2025-11-23T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.593358 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bef58635ac91dee3ae059f1e1a13f5fd717c00e7d13ae580a6a36c9ed22685f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:10Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.614619 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5081b466-d08b-4432-92f6-7f7f1c9fb607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e828a18d60c1a77b19b2a707494209d60b21e5a11007c4029bcfffcc632866b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04b9e9ababbd821f8c8d2d640867c41d6918182b6869ee198fee85a37934e530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac138ef465b8fb107aa6828f2d531a40cae862514aa9629e5b7bf88c2348c3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:10Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.640588 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:10Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.656846 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad989470-f62f-4c09-a038-600445a0bef9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1435109324c9f93586933d8583b63776eca644bd447bad750c4a692226bf2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e145cb8fa4966f57d40558e9b6b723ac0612635b0e55eec45a6e095da689c590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5tcvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:10Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.671596 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:10Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.688654 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.688693 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.688704 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.688721 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.688733 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:10Z","lastTransitionTime":"2025-11-23T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.693970 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T14:46:58Z\\\",\\\"message\\\":\\\"etwork-check-source-55646444c4-trplf\\\\nI1123 14:46:58.464887 6533 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1123 14:46:58.464894 6533 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1123 14:46:58.464898 6533 lb_config.go:1031] Cluster endpoints for openshift-authentication/oauth-openshift for network=default are: map[]\\\\nF1123 14:46:58.464911 6533 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:58Z is after 2025-08-24T17:21:41Z]\\\\nI1123 14:46:58.464917 6533 base_network_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zjskv_openshift-ovn-kubernetes(aa4a9264-1cb9-41bc-a30a-4e09bde21387)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:10Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.713358 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053cb938dee3d8b74728c76a43dced53bea21f7e4385be0c1d5343cb36bf6767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-23T14:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:10Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.725825 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:10Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.742133 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:10Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.760901 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:10Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.790671 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.790720 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.790732 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.790747 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.790758 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:10Z","lastTransitionTime":"2025-11-23T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.897894 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.897978 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.898000 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.898059 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:10 crc kubenswrapper[4718]: I1123 14:47:10.898079 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:10Z","lastTransitionTime":"2025-11-23T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.001112 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.001182 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.001196 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.001212 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.001224 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:11Z","lastTransitionTime":"2025-11-23T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.103554 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.103610 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.103627 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.103650 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.103670 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:11Z","lastTransitionTime":"2025-11-23T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.206466 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.206506 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.206516 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.206531 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.206544 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:11Z","lastTransitionTime":"2025-11-23T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.309214 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.309271 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.309291 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.309313 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.309329 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:11Z","lastTransitionTime":"2025-11-23T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.487792 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:47:11 crc kubenswrapper[4718]: E1123 14:47:11.487952 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.490113 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.490149 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.490158 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.490173 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.490183 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:11Z","lastTransitionTime":"2025-11-23T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.593347 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.593393 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.593404 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.593417 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.593427 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:11Z","lastTransitionTime":"2025-11-23T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.696610 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.696663 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.696676 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.696692 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.696705 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:11Z","lastTransitionTime":"2025-11-23T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.799117 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.799149 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.799156 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.799169 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.799177 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:11Z","lastTransitionTime":"2025-11-23T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.901627 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.901693 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.901713 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.901740 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:11 crc kubenswrapper[4718]: I1123 14:47:11.901759 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:11Z","lastTransitionTime":"2025-11-23T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.004954 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.005012 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.005028 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.005054 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.005071 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:12Z","lastTransitionTime":"2025-11-23T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.108389 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.108521 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.108552 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.108585 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.108607 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:12Z","lastTransitionTime":"2025-11-23T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.212167 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.212236 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.212253 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.212280 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.212299 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:12Z","lastTransitionTime":"2025-11-23T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.315182 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.315250 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.315269 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.315297 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.315320 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:12Z","lastTransitionTime":"2025-11-23T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.317294 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.317369 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.317562 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.317592 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.317613 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:12Z","lastTransitionTime":"2025-11-23T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:12 crc kubenswrapper[4718]: E1123 14:47:12.338425 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:12Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.343512 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.343568 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.343630 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.343689 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.343709 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:12Z","lastTransitionTime":"2025-11-23T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:12 crc kubenswrapper[4718]: E1123 14:47:12.365719 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:12Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.370981 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.371048 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.371072 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.371104 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.371126 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:12Z","lastTransitionTime":"2025-11-23T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:12 crc kubenswrapper[4718]: E1123 14:47:12.392681 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:12Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.398020 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.398089 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.398107 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.398133 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.398151 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:12Z","lastTransitionTime":"2025-11-23T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:12 crc kubenswrapper[4718]: E1123 14:47:12.417352 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:12Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.422327 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.422378 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.422396 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.422421 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.422476 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:12Z","lastTransitionTime":"2025-11-23T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.441056 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:47:12 crc kubenswrapper[4718]: E1123 14:47:12.441265 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.441661 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.441730 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:47:12 crc kubenswrapper[4718]: E1123 14:47:12.441790 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:47:12 crc kubenswrapper[4718]: E1123 14:47:12.441860 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.442624 4718 scope.go:117] "RemoveContainer" containerID="ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff" Nov 23 14:47:12 crc kubenswrapper[4718]: E1123 14:47:12.442785 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zjskv_openshift-ovn-kubernetes(aa4a9264-1cb9-41bc-a30a-4e09bde21387)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" Nov 23 14:47:12 crc kubenswrapper[4718]: E1123 14:47:12.449133 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"615b60d9-25a6-45a0-a365-4c423e4d937a\\\",\\\"systemUUID\\\":\\\"18ae8787-5c21-4432-a923-66f25f4a0fdf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:12Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:12 crc kubenswrapper[4718]: E1123 14:47:12.449482 4718 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.451262 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.451311 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.451327 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.451346 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.451359 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:12Z","lastTransitionTime":"2025-11-23T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.554072 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.554144 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.554162 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.554185 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.554202 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:12Z","lastTransitionTime":"2025-11-23T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.657519 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.657653 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.657680 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.657712 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.657732 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:12Z","lastTransitionTime":"2025-11-23T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.760761 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.760931 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.760954 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.760983 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.761000 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:12Z","lastTransitionTime":"2025-11-23T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.864326 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.864413 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.864431 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.864492 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.864510 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:12Z","lastTransitionTime":"2025-11-23T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.967555 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.967618 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.967654 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.967688 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:12 crc kubenswrapper[4718]: I1123 14:47:12.967711 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:12Z","lastTransitionTime":"2025-11-23T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.070779 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.070860 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.070882 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.070910 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.070931 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:13Z","lastTransitionTime":"2025-11-23T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.173539 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.173662 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.173684 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.173709 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.173731 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:13Z","lastTransitionTime":"2025-11-23T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.276883 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.276944 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.276967 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.276995 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.277018 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:13Z","lastTransitionTime":"2025-11-23T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.380715 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.380794 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.380816 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.380846 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.380868 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:13Z","lastTransitionTime":"2025-11-23T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.440720 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:47:13 crc kubenswrapper[4718]: E1123 14:47:13.440949 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.483661 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.483724 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.483748 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.483775 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.483797 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:13Z","lastTransitionTime":"2025-11-23T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.587704 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.587770 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.587811 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.587834 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.587850 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:13Z","lastTransitionTime":"2025-11-23T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.690885 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.690945 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.690962 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.690991 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.691008 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:13Z","lastTransitionTime":"2025-11-23T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.794565 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.794667 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.794688 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.794745 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.794765 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:13Z","lastTransitionTime":"2025-11-23T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.898217 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.898281 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.898300 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.898329 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:13 crc kubenswrapper[4718]: I1123 14:47:13.898346 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:13Z","lastTransitionTime":"2025-11-23T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.001625 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.001683 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.001699 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.001722 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.001738 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:14Z","lastTransitionTime":"2025-11-23T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.075129 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qb66k_49e539fc-7a1f-42e0-9a69-230331321d85/kube-multus/0.log" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.075214 4718 generic.go:334] "Generic (PLEG): container finished" podID="49e539fc-7a1f-42e0-9a69-230331321d85" containerID="d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d" exitCode=1 Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.075307 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qb66k" event={"ID":"49e539fc-7a1f-42e0-9a69-230331321d85","Type":"ContainerDied","Data":"d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d"} Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.076321 4718 scope.go:117] "RemoveContainer" containerID="d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.096580 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:14Z is after 2025-08-24T17:21:41Z"
Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.130266 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T14:46:58Z\\\",\\\"message\\\":\\\"etwork-check-source-55646444c4-trplf\\\\nI1123 14:46:58.464887 6533 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1123 14:46:58.464894 6533 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1123 14:46:58.464898 6533 lb_config.go:1031] Cluster endpoints for openshift-authentication/oauth-openshift for network=default are: map[]\\\\nF1123 14:46:58.464911 6533 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:58Z is after 2025-08-24T17:21:41Z]\\\\nI1123 14:46:58.464917 6533 base_network_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zjskv_openshift-ovn-kubernetes(aa4a9264-1cb9-41bc-a30a-4e09bde21387)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:14Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.148861 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad989470-f62f-4c09-a038-600445a0bef9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1435109324c9f93586933d8583b63776eca644bd447bad750c4a692226bf2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e145cb8fa4966f57d40558e9b6b723ac0612635b0e55eec45a6e095da689c590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5tcvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:14Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.174692 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:14Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.193036 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:14Z is after 2025-08-24T17:21:41Z"
Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.211992 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053cb938dee3d8b74728c76a43dced53bea21f7e4385be0c1d5343cb36bf6767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:14Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.228365 4718 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:14Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.251850 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:14Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.269810 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:14Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.290642 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:14Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.309696 4718 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:14Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.313551 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.313644 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.313673 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.313709 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.313747 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:14Z","lastTransitionTime":"2025-11-23T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.337565 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:14Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.354219 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5081b466-d08b-4432-92f6-7f7f1c9fb607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e828a18d60c1a77b19b2a707494209d60b21e5a11007c4029bcfffcc632866b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04b9e9ababbd821f8c8d2d640867c41d6918182b6869ee198fee85a37934e530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac138ef465b8fb107aa6828f2d531a40cae862514aa9629e5b7bf88c2348c3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:14Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.373534 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:14Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.390130 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T14:47:13Z\\\",\\\"message\\\":\\\"2025-11-23T14:46:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7306732d-74b9-4dc5-a4cf-6178f786a72b\\\\n2025-11-23T14:46:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7306732d-74b9-4dc5-a4cf-6178f786a72b to /host/opt/cni/bin/\\\\n2025-11-23T14:46:28Z [verbose] multus-daemon started\\\\n2025-11-23T14:46:28Z [verbose] Readiness Indicator file 
check\\\\n2025-11-23T14:47:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:14Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.407237 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7qh4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82c8ca1-7a30-47c8-a679-abe265aca15b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7qh4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:14Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.419635 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.419725 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.419751 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.419776 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.419795 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:14Z","lastTransitionTime":"2025-11-23T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.429555 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bef58635ac91dee3ae059f1e1a13f5fd717c00e7d13ae580a6a36c9ed22685f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:14Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.440844 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.440970 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:47:14 crc kubenswrapper[4718]: E1123 14:47:14.441223 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.441275 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:47:14 crc kubenswrapper[4718]: E1123 14:47:14.441406 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:47:14 crc kubenswrapper[4718]: E1123 14:47:14.441703 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.522612 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.522681 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.522704 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.522733 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.522756 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:14Z","lastTransitionTime":"2025-11-23T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.625070 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.625112 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.625125 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.625142 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.625153 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:14Z","lastTransitionTime":"2025-11-23T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.728428 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.728531 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.728553 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.728583 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.728605 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:14Z","lastTransitionTime":"2025-11-23T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.831908 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.831992 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.832018 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.832044 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.832062 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:14Z","lastTransitionTime":"2025-11-23T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.933917 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.933953 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.933964 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.933979 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:14 crc kubenswrapper[4718]: I1123 14:47:14.933992 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:14Z","lastTransitionTime":"2025-11-23T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.036320 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.036402 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.036423 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.036488 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.036505 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:15Z","lastTransitionTime":"2025-11-23T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.082857 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qb66k_49e539fc-7a1f-42e0-9a69-230331321d85/kube-multus/0.log" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.082941 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qb66k" event={"ID":"49e539fc-7a1f-42e0-9a69-230331321d85","Type":"ContainerStarted","Data":"83e7c0362ec8bf0fdd30ec09b91cc3b684584ac51f4c998ff677399de339d44e"} Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.105919 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f232400-866e-484d-bade-2e896d8dfc32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e20845d2729b2dde68c25dc7851fbc82e4de7e7391d7d00e09f49f880eb5b9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf5edfdf6a73c524ff810cb454292dc8ff28c4b415f7c90792f3eda7278593cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d071d6ab05102ffd7e36d8f7fd7571ca4f305fc5900b5bca1dfd4eaa2236ac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\
\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bef58635ac91dee3ae059f1e1a13f5fd717c00e7d13ae580a6a36c9ed22685f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d252ca26c1d879924892a6cf1404523a37060de35995ce101ad408602af09b21\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 14:46:13.622726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 14:46:13.623081 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 14:46:13.625142 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923425714/tls.crt::/tmp/serving-cert-1923425714/tls.key\\\\\\\"\\\\nI1123 14:46:13.999368 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 14:46:14.004697 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 14:46:14.004720 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 14:46:14.004745 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 14:46:14.004751 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 14:46:14.011865 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 14:46:14.011885 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 14:46:14.011893 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 14:46:14.011906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 14:46:14.011911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 14:46:14.011915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 14:46:14.011919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 14:46:14.013385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3609620b7728788e8ff0081338bfcbc4a35a50e16794b474eb11268fec69cae5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd7893b5c0bc8f9cd9abc83508f005c51a4fae850bda6b118bd70fc87e7313c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:15Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.126303 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5081b466-d08b-4432-92f6-7f7f1c9fb607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e828a18d60c1a77b19b2a707494209d60b21e5a11007c4029bcfffcc632866b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04b9e9ababbd821f8c8d2d640867c41d6918182b6869ee198fee85a37934e530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac138ef465b8fb107aa6828f2d531a40cae862514aa9629e5b7bf88c2348c3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eddb9085055a7b1def92fd870c632d4c1b6bb31c15cf964fa586cf0ed50cbc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:15Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.139000 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.139064 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.139087 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.139116 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.139134 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:15Z","lastTransitionTime":"2025-11-23T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.145637 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63dfd72c1d7cd22f38977434f99f0b032123eedbd0c2c3c6e76d16151730b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f94f414e5cadc5be460a0f60dafd19abd84cb81b35e4267cfcb48c1b2ddfc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:15Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.165356 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qb66k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e539fc-7a1f-42e0-9a69-230331321d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e7c0362ec8bf0fdd30ec09b91cc3b684584ac51f4c998ff677399de339d44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T14:47:13Z\\\",\\\"message\\\":\\\"2025-11-23T14:46:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7306732d-74b9-4dc5-a4cf-6178f786a72b\\\\n2025-11-23T14:46:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7306732d-74b9-4dc5-a4cf-6178f786a72b to /host/opt/cni/bin/\\\\n2025-11-23T14:46:28Z [verbose] multus-daemon started\\\\n2025-11-23T14:46:28Z [verbose] Readiness Indicator file check\\\\n2025-11-23T14:47:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7ncn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qb66k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:15Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.181577 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7qh4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82c8ca1-7a30-47c8-a679-abe265aca15b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96ff9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7qh4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:15Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.201241 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:15Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.231962 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa4a9264-1cb9-41bc-a30a-4e09bde21387\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0e8f0cd14780374a230d9d6d8617452b950092
1d7737d94829a08481a0ebff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T14:46:58Z\\\",\\\"message\\\":\\\"etwork-check-source-55646444c4-trplf\\\\nI1123 14:46:58.464887 6533 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1123 14:46:58.464894 6533 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1123 14:46:58.464898 6533 lb_config.go:1031] Cluster endpoints for openshift-authentication/oauth-openshift for network=default are: map[]\\\\nF1123 14:46:58.464911 6533 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:46:58Z is after 2025-08-24T17:21:41Z]\\\\nI1123 14:46:58.464917 6533 base_network_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zjskv_openshift-ovn-kubernetes(aa4a9264-1cb9-41bc-a30a-4e09bde21387)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv7dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zjskv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:15Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.241859 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.241931 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.241955 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.241985 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.242011 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:15Z","lastTransitionTime":"2025-11-23T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.251521 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad989470-f62f-4c09-a038-600445a0bef9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1435109324c9f93586933d8583b63776eca644bd447bad750c4a692226bf2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e145cb8fa4966f57d40558e9b6b723ac0612635b0e55eec45a6e095da689c590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5tcvp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:15Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.272317 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:15Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.291833 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:15Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.340324 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-557f4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d7996bb-8907-49ac-afb1-a8de8d2553c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053cb938dee3d8b74728c76a43dced53bea21f7e4385be0c1d5343cb36bf6767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777626a215f4b3af4149aa5f7e32e0a211e5c23cabb5c49e160dd8b676ce53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3415554d5fee55745befbbd1faaafc5585696ed564001eef186a5bdeec790fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9575e0198a0c8fdd83b879d1f25a296a796cea65b1b38b2e2311ea82f5866816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0478fed9abc8819b7c348ebdee1aac664846a6a7f1bba5cc2d29b552073cdc3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a502b73f148ec47e808a41926d4026d1157ee3780967c0b5060858a7db4d32ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53bb4f7bfdd345ccc17db7e5b8d598ece406810e935cbecf83b2c5cbfe932cfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx62w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-557f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:15Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.345476 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.345516 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:15 crc 
kubenswrapper[4718]: I1123 14:47:15.345528 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.345546 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.345561 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:15Z","lastTransitionTime":"2025-11-23T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.357249 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv78j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bba146b-d0e0-40f1-afc3-ac9cf1d28bf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77b563e37642a75d0d627d41adbd60e1f14d9df156f45ddf344653bb4c3c4321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbhwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv78j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:15Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.373769 4718 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"874b66df-e87a-4aab-9210-39f5fca306a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65f63edd0ea1f37724e3dade7c5b3178687406f9434f37fb889216e6f16e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e22f221559075022d2872cb6981de39d1c83e5b5b462971091d346e8b58f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef14058c5a5a7724864b47d5f4131bb6004f6649cf290724befbf58e19d411e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://c63aadd9cc97e5c0e79b67a25aa07794f73cc9bceaa2d9de0e16caa129c9e27d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:15Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.393879 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c799b2a3e73082511bfe1e61fad76be32e2e5232b1362c0493ad0599b63e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:15Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.409918 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22aa863df87f130fc2a2f98dbd155bc97dac4c37574bc29dfb3cd871e0dd6ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:15Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.422150 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://202a9444cb36b7fd888a13eb0bbb78265a0406f81572d89870baae9de5419376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hkdqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:15Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.434804 4718 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-z75dw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72846732-1e66-4f5b-9b12-2a3a9bf21672\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6cb883c2cdb7c5d721d9b6f4a72749a021d51048a4e4f16f7e552d013ab3301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmt45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:46:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z75dw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:15Z is after 2025-08-24T17:21:41Z" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.439958 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:47:15 crc kubenswrapper[4718]: E1123 14:47:15.440105 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.448018 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.448089 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.448113 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.448139 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.448161 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:15Z","lastTransitionTime":"2025-11-23T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.455121 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.551138 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.551169 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.551179 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.551191 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.551201 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:15Z","lastTransitionTime":"2025-11-23T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.654633 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.654691 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.654706 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.654727 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.654742 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:15Z","lastTransitionTime":"2025-11-23T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.758145 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.758209 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.758227 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.758250 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.758267 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:15Z","lastTransitionTime":"2025-11-23T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.860368 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.860423 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.860484 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.860518 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.860536 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:15Z","lastTransitionTime":"2025-11-23T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.962547 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.962621 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.962639 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.962663 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:15 crc kubenswrapper[4718]: I1123 14:47:15.962680 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:15Z","lastTransitionTime":"2025-11-23T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.065335 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.065401 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.065417 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.065479 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.065500 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:16Z","lastTransitionTime":"2025-11-23T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.167843 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.167926 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.167947 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.167978 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.167996 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:16Z","lastTransitionTime":"2025-11-23T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.270590 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.270676 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.270735 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.270760 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.270777 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:16Z","lastTransitionTime":"2025-11-23T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.373930 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.374008 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.374050 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.374097 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.374113 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:16Z","lastTransitionTime":"2025-11-23T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.441115 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.441137 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.441226 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:47:16 crc kubenswrapper[4718]: E1123 14:47:16.441271 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:47:16 crc kubenswrapper[4718]: E1123 14:47:16.441316 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:47:16 crc kubenswrapper[4718]: E1123 14:47:16.441460 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.477772 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.477835 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.477857 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.477886 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.477904 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:16Z","lastTransitionTime":"2025-11-23T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.581326 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.581391 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.581418 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.581468 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.581480 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:16Z","lastTransitionTime":"2025-11-23T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.684308 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.684368 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.684379 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.684400 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.684418 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:16Z","lastTransitionTime":"2025-11-23T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.787497 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.787569 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.787588 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.787616 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.787636 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:16Z","lastTransitionTime":"2025-11-23T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.891241 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.891309 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.891332 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.891361 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.891380 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:16Z","lastTransitionTime":"2025-11-23T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.995070 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.995198 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.995220 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.995247 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:16 crc kubenswrapper[4718]: I1123 14:47:16.995267 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:16Z","lastTransitionTime":"2025-11-23T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.098648 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.098708 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.098723 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.098744 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.098759 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:17Z","lastTransitionTime":"2025-11-23T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.201267 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.201316 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.201326 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.201345 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.201359 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:17Z","lastTransitionTime":"2025-11-23T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.305540 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.305617 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.305638 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.305665 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.305688 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:17Z","lastTransitionTime":"2025-11-23T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.409230 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.409308 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.409327 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.409365 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.409388 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:17Z","lastTransitionTime":"2025-11-23T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.440842 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:47:17 crc kubenswrapper[4718]: E1123 14:47:17.441055 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.513280 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.513376 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.513403 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.513470 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.513513 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:17Z","lastTransitionTime":"2025-11-23T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.616624 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.616695 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.616714 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.616740 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.616765 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:17Z","lastTransitionTime":"2025-11-23T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.721034 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.721128 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.721154 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.721187 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.721210 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:17Z","lastTransitionTime":"2025-11-23T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.824269 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.824331 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.824348 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.824372 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.824391 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:17Z","lastTransitionTime":"2025-11-23T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.927390 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.927477 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.927491 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.927508 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:17 crc kubenswrapper[4718]: I1123 14:47:17.927520 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:17Z","lastTransitionTime":"2025-11-23T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.029969 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.030029 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.030041 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.030062 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.030076 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:18Z","lastTransitionTime":"2025-11-23T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.133843 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.133940 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.133966 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.134001 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.134028 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:18Z","lastTransitionTime":"2025-11-23T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.237642 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.237722 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.237738 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.237767 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.237787 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:18Z","lastTransitionTime":"2025-11-23T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.269386 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:18 crc kubenswrapper[4718]: E1123 14:47:18.269699 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:48:22.269657581 +0000 UTC m=+153.509277465 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.341201 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.341253 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.341268 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.341288 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.341301 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:18Z","lastTransitionTime":"2025-11-23T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
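The TearDown failure above is a separate symptom of the same restart: the kubelet wants to unmount a hostpath-provisioner PVC, but that CSI driver has not re-registered with the kubelet yet, so the operation is parked with a 1m4s backoff (retry at 14:48:22). A sketch of how one might check driver registration (hypothetical commands; the hostpath-provisioner namespace name is an assumption):

    # Cluster-side view of registered CSI drivers
    oc get csidrivers
    # Node-side: registration sockets the kubelet's plugin watcher sees
    ls /var/lib/kubelet/plugins_registry/
    # Is the provisioner pod running yet? (namespace assumed)
    oc get pods -n hostpath-provisioner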
Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.370914 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.371016 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.371076 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.371117 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 14:47:18 crc kubenswrapper[4718]: E1123 14:47:18.371169 4718 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Nov 23 14:47:18 crc kubenswrapper[4718]: E1123 14:47:18.371319 4718 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 23 14:47:18 crc kubenswrapper[4718]: E1123 14:47:18.371321 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 14:48:22.371287026 +0000 UTC m=+153.610906910 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 23 14:47:18 crc kubenswrapper[4718]: E1123 14:47:18.371344 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 23 14:47:18 crc kubenswrapper[4718]: E1123 14:47:18.371379 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 23 14:47:18 crc kubenswrapper[4718]: E1123 14:47:18.371403 4718 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 23 14:47:18 crc kubenswrapper[4718]: E1123 14:47:18.371395 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 23 14:47:18 crc kubenswrapper[4718]: E1123 14:47:18.371509 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 23 14:47:18 crc kubenswrapper[4718]: E1123 14:47:18.371542 4718 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 23 14:47:18 crc kubenswrapper[4718]: E1123 14:47:18.371410 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 14:48:22.371384309 +0000 UTC m=+153.611004183 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 23 14:47:18 crc kubenswrapper[4718]: E1123 14:47:18.371619 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 14:48:22.371603444 +0000 UTC m=+153.611223508 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 23 14:47:18 crc kubenswrapper[4718]: E1123 14:47:18.371637 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 14:48:22.371629655 +0000 UTC m=+153.611249499 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.441032 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.441163 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.441289 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j"
Nov 23 14:47:18 crc kubenswrapper[4718]: E1123 14:47:18.441428 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 14:47:18 crc kubenswrapper[4718]: E1123 14:47:18.441632 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b"
Nov 23 14:47:18 crc kubenswrapper[4718]: E1123 14:47:18.441847 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
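"object ... not registered" here does not mean the ConfigMap or Secret is absent from the API server; it is the kubelet reporting that its local object cache has no watch registered for that object yet, which is common in the first seconds after a kubelet restart and is retried after the same 1m4s backoff. If the errors persisted past the retry, one could verify the objects themselves exist (hypothetical checks, not from the log):

    oc get configmap networking-console-plugin -n openshift-network-console
    oc get secret networking-console-plugin-cert -n openshift-network-console
    oc get configmap kube-root-ca.crt openshift-service-ca.crt -n openshift-network-diagnostics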
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.444083 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.444129 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.444146 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.444167 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.444186 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:18Z","lastTransitionTime":"2025-11-23T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.461604 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.547180 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.547325 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.547359 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.547388 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:18 crc kubenswrapper[4718]: I1123 14:47:18.547412 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:18Z","lastTransitionTime":"2025-11-23T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... node-status cycle repeats every ~100 ms from 14:47:18.650 through 14:47:19.171; only the timestamps change ...]
[... node-status cycle repeats at 14:47:19.275 and 14:47:19.378 ...]
Nov 23 14:47:19 crc kubenswrapper[4718]: I1123 14:47:19.440671 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 14:47:19 crc kubenswrapper[4718]: E1123 14:47:19.440864 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[... node-status cycle repeats every ~100 ms from 14:47:19.481 through 14:47:20.201 ...]
[... node-status cycle repeats at 14:47:20.304 and 14:47:20.408 ...]
Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.440240 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.440295 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.440486 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j"
Nov 23 14:47:20 crc kubenswrapper[4718]: E1123 14:47:20.440481 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 14:47:20 crc kubenswrapper[4718]: E1123 14:47:20.440762 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 14:47:20 crc kubenswrapper[4718]: E1123 14:47:20.440896 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b"
Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.472800 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6174f37b-42a1-4f11-a314-d9f6e316d7df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f5a5f59e7490aa29c34158b488104fb5b7a97c4d592b798020480ec6512a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b3135ef1e720022c5c8840c06e9fd4b9b22fa85220e89c94b0504c25b1f8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6ede509d54bf79c21ef203d95bff038b24fb6a06742d0d37c8ec4d7739bb4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064801ca5a1264542db6002ef94ea9fff905928bcbd8637c7c1b6aa72f8080b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a90f00d571d0e6adc9a76ef89739e8c33c11a9d2064500c87aef98e1633af563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T14:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8f18c498c78029693d699412f511a760b5b633de5ee6ebec7f71abcea1aa5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f8f18c498c78029693d699412f511a760b5b633de5ee6ebec7f71abcea1aa5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae197633d78076184167ffd812868272b8be041c8ad9e2fc56d54d8ed5cd7e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae197633d78076184167ffd812868272b8be041c8ad9e2fc56d54d8ed5cd7e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://780ffe920ee499dfcc85eb6fb05256e8e66d53cb3405bcc304d608bd011460a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://780ffe920ee499dfcc85eb6fb05256e8e66d53cb3405bcc304d608bd011460a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T14:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T14:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T14:45:50Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:20Z is after 2025-08-24T17:21:41Z"
Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.491904 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T14:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T14:47:20Z is after 2025-08-24T17:21:41Z"
Has your network provider started?"} Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.543881 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-557f4" podStartSLOduration=59.543858252 podStartE2EDuration="59.543858252s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:20.543839622 +0000 UTC m=+91.783459486" watchObservedRunningTime="2025-11-23 14:47:20.543858252 +0000 UTC m=+91.783478106" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.560190 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-kv78j" podStartSLOduration=59.560166556 podStartE2EDuration="59.560166556s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:20.559856748 +0000 UTC m=+91.799476612" watchObservedRunningTime="2025-11-23 14:47:20.560166556 +0000 UTC m=+91.799786400" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.577464 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=62.577432333 podStartE2EDuration="1m2.577432333s" podCreationTimestamp="2025-11-23 14:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:20.575272239 +0000 UTC m=+91.814892083" watchObservedRunningTime="2025-11-23 14:47:20.577432333 +0000 UTC m=+91.817052177" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.613721 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.613750 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.613759 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.613775 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.613787 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:20Z","lastTransitionTime":"2025-11-23T14:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.628134 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podStartSLOduration=59.628116287 podStartE2EDuration="59.628116287s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:20.613574759 +0000 UTC m=+91.853194633" watchObservedRunningTime="2025-11-23 14:47:20.628116287 +0000 UTC m=+91.867736131" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.648473 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-z75dw" podStartSLOduration=59.648417322 podStartE2EDuration="59.648417322s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:20.628307212 +0000 UTC m=+91.867927076" watchObservedRunningTime="2025-11-23 14:47:20.648417322 +0000 UTC m=+91.888037176" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.664699 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=40.664671704 podStartE2EDuration="40.664671704s" podCreationTimestamp="2025-11-23 14:46:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:20.664568871 +0000 UTC m=+91.904188715" watchObservedRunningTime="2025-11-23 14:47:20.664671704 +0000 UTC m=+91.904291568" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.664898 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=66.664889429 podStartE2EDuration="1m6.664889429s" podCreationTimestamp="2025-11-23 14:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:20.648009262 +0000 UTC m=+91.887629116" watchObservedRunningTime="2025-11-23 14:47:20.664889429 +0000 UTC m=+91.904509293" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.706716 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-qb66k" podStartSLOduration=59.706683878 podStartE2EDuration="59.706683878s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:20.695250028 +0000 UTC m=+91.934869892" watchObservedRunningTime="2025-11-23 14:47:20.706683878 +0000 UTC m=+91.946303732" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.716876 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.716950 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.716973 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.716997 4718 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.717013 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:20Z","lastTransitionTime":"2025-11-23T14:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.717807 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=2.71778137 podStartE2EDuration="2.71778137s" podCreationTimestamp="2025-11-23 14:47:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:20.71626429 +0000 UTC m=+91.955884144" watchObservedRunningTime="2025-11-23 14:47:20.71778137 +0000 UTC m=+91.957401214" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.769297 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5tcvp" podStartSLOduration=59.769281294 podStartE2EDuration="59.769281294s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:20.768982966 +0000 UTC m=+92.008602830" watchObservedRunningTime="2025-11-23 14:47:20.769281294 +0000 UTC m=+92.008901138" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.819962 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.820003 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.820014 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.820030 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.820042 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:20Z","lastTransitionTime":"2025-11-23T14:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.922409 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.922461 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.922470 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.922485 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:20 crc kubenswrapper[4718]: I1123 14:47:20.922494 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:20Z","lastTransitionTime":"2025-11-23T14:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.025512 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.025561 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.025572 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.025589 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.025604 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:21Z","lastTransitionTime":"2025-11-23T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.129779 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.129826 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.129842 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.130045 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.130066 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:21Z","lastTransitionTime":"2025-11-23T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.232304 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.232385 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.232409 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.232490 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.232519 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:21Z","lastTransitionTime":"2025-11-23T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.336179 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.336245 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.336262 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.336286 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.336304 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:21Z","lastTransitionTime":"2025-11-23T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.439484 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.439591 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.439610 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.439633 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.439653 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:21Z","lastTransitionTime":"2025-11-23T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.439909 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:47:21 crc kubenswrapper[4718]: E1123 14:47:21.440059 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.542236 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.542287 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.542304 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.542332 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.542352 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:21Z","lastTransitionTime":"2025-11-23T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.645027 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.645100 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.645113 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.645130 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.645143 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:21Z","lastTransitionTime":"2025-11-23T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.748792 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.748844 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.748857 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.748904 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.748918 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:21Z","lastTransitionTime":"2025-11-23T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.852634 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.852727 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.852749 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.852769 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.852783 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:21Z","lastTransitionTime":"2025-11-23T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.955579 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.955642 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.955664 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.955696 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:21 crc kubenswrapper[4718]: I1123 14:47:21.955719 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:21Z","lastTransitionTime":"2025-11-23T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.059142 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.059198 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.059217 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.059239 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.059258 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:22Z","lastTransitionTime":"2025-11-23T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.162115 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.162159 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.162173 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.162193 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.162240 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:22Z","lastTransitionTime":"2025-11-23T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.264731 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.264799 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.264815 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.264841 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.264869 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:22Z","lastTransitionTime":"2025-11-23T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.368501 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.368566 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.368588 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.368616 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.368637 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:22Z","lastTransitionTime":"2025-11-23T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.440420 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:47:22 crc kubenswrapper[4718]: E1123 14:47:22.440637 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.440423 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:47:22 crc kubenswrapper[4718]: E1123 14:47:22.440752 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.440415 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:47:22 crc kubenswrapper[4718]: E1123 14:47:22.440865 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.471162 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.471218 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.471241 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.471269 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.471293 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:22Z","lastTransitionTime":"2025-11-23T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.574135 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.574174 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.574182 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.574196 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.574204 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:22Z","lastTransitionTime":"2025-11-23T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.684225 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.684280 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.684298 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.684320 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.684339 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:22Z","lastTransitionTime":"2025-11-23T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.696419 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.696505 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.696523 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.696544 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.696561 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T14:47:22Z","lastTransitionTime":"2025-11-23T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.750906 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-td5hc"] Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.751706 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-td5hc" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.756136 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.756500 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.756847 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.757131 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.792292 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=7.79226001 podStartE2EDuration="7.79226001s" podCreationTimestamp="2025-11-23 14:47:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:22.789053899 +0000 UTC m=+94.028673783" watchObservedRunningTime="2025-11-23 14:47:22.79226001 +0000 UTC m=+94.031879894" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.821208 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6285196-c927-427a-a9b1-1f15c99d0688-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-td5hc\" (UID: \"d6285196-c927-427a-a9b1-1f15c99d0688\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-td5hc" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.821257 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d6285196-c927-427a-a9b1-1f15c99d0688-service-ca\") pod \"cluster-version-operator-5c965bbfc6-td5hc\" (UID: \"d6285196-c927-427a-a9b1-1f15c99d0688\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-td5hc" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.821299 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6285196-c927-427a-a9b1-1f15c99d0688-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-td5hc\" (UID: \"d6285196-c927-427a-a9b1-1f15c99d0688\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-td5hc" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.821324 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d6285196-c927-427a-a9b1-1f15c99d0688-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-td5hc\" (UID: \"d6285196-c927-427a-a9b1-1f15c99d0688\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-td5hc" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.821357 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d6285196-c927-427a-a9b1-1f15c99d0688-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-td5hc\" (UID: 
\"d6285196-c927-427a-a9b1-1f15c99d0688\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-td5hc" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.922547 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6285196-c927-427a-a9b1-1f15c99d0688-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-td5hc\" (UID: \"d6285196-c927-427a-a9b1-1f15c99d0688\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-td5hc" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.922612 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d6285196-c927-427a-a9b1-1f15c99d0688-service-ca\") pod \"cluster-version-operator-5c965bbfc6-td5hc\" (UID: \"d6285196-c927-427a-a9b1-1f15c99d0688\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-td5hc" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.922695 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6285196-c927-427a-a9b1-1f15c99d0688-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-td5hc\" (UID: \"d6285196-c927-427a-a9b1-1f15c99d0688\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-td5hc" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.922762 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d6285196-c927-427a-a9b1-1f15c99d0688-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-td5hc\" (UID: \"d6285196-c927-427a-a9b1-1f15c99d0688\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-td5hc" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.922811 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d6285196-c927-427a-a9b1-1f15c99d0688-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-td5hc\" (UID: \"d6285196-c927-427a-a9b1-1f15c99d0688\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-td5hc" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.922961 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d6285196-c927-427a-a9b1-1f15c99d0688-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-td5hc\" (UID: \"d6285196-c927-427a-a9b1-1f15c99d0688\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-td5hc" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.922958 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d6285196-c927-427a-a9b1-1f15c99d0688-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-td5hc\" (UID: \"d6285196-c927-427a-a9b1-1f15c99d0688\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-td5hc" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.924607 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d6285196-c927-427a-a9b1-1f15c99d0688-service-ca\") pod \"cluster-version-operator-5c965bbfc6-td5hc\" (UID: \"d6285196-c927-427a-a9b1-1f15c99d0688\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-td5hc" Nov 
23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.932281 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6285196-c927-427a-a9b1-1f15c99d0688-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-td5hc\" (UID: \"d6285196-c927-427a-a9b1-1f15c99d0688\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-td5hc" Nov 23 14:47:22 crc kubenswrapper[4718]: I1123 14:47:22.952830 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6285196-c927-427a-a9b1-1f15c99d0688-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-td5hc\" (UID: \"d6285196-c927-427a-a9b1-1f15c99d0688\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-td5hc" Nov 23 14:47:23 crc kubenswrapper[4718]: I1123 14:47:23.076979 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-td5hc" Nov 23 14:47:23 crc kubenswrapper[4718]: W1123 14:47:23.098001 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6285196_c927_427a_a9b1_1f15c99d0688.slice/crio-c0c2e55f2003768573dd66ea0f29b73c80b9509226042154f825a12b22fae04c WatchSource:0}: Error finding container c0c2e55f2003768573dd66ea0f29b73c80b9509226042154f825a12b22fae04c: Status 404 returned error can't find the container with id c0c2e55f2003768573dd66ea0f29b73c80b9509226042154f825a12b22fae04c Nov 23 14:47:23 crc kubenswrapper[4718]: I1123 14:47:23.116970 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-td5hc" event={"ID":"d6285196-c927-427a-a9b1-1f15c99d0688","Type":"ContainerStarted","Data":"c0c2e55f2003768573dd66ea0f29b73c80b9509226042154f825a12b22fae04c"} Nov 23 14:47:23 crc kubenswrapper[4718]: I1123 14:47:23.440207 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:47:23 crc kubenswrapper[4718]: E1123 14:47:23.440720 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:47:24 crc kubenswrapper[4718]: I1123 14:47:24.123290 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-td5hc" event={"ID":"d6285196-c927-427a-a9b1-1f15c99d0688","Type":"ContainerStarted","Data":"6e1ae1e00a724d3140682868a100987bb497275bd64a6844d76de97ef9d5b9d7"} Nov 23 14:47:24 crc kubenswrapper[4718]: I1123 14:47:24.143890 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-td5hc" podStartSLOduration=63.143865955 podStartE2EDuration="1m3.143865955s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:24.141676 +0000 UTC m=+95.381295884" watchObservedRunningTime="2025-11-23 14:47:24.143865955 +0000 UTC m=+95.383485839" Nov 23 14:47:24 crc kubenswrapper[4718]: I1123 14:47:24.440767 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:47:24 crc kubenswrapper[4718]: I1123 14:47:24.440780 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:47:24 crc kubenswrapper[4718]: E1123 14:47:24.440973 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:47:24 crc kubenswrapper[4718]: I1123 14:47:24.440810 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:47:24 crc kubenswrapper[4718]: E1123 14:47:24.441179 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:47:24 crc kubenswrapper[4718]: E1123 14:47:24.441039 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:47:25 crc kubenswrapper[4718]: I1123 14:47:25.440559 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:47:25 crc kubenswrapper[4718]: E1123 14:47:25.440804 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:47:26 crc kubenswrapper[4718]: I1123 14:47:26.440163 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:47:26 crc kubenswrapper[4718]: I1123 14:47:26.440163 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:47:26 crc kubenswrapper[4718]: E1123 14:47:26.441009 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:47:26 crc kubenswrapper[4718]: I1123 14:47:26.440275 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:47:26 crc kubenswrapper[4718]: E1123 14:47:26.441147 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:47:26 crc kubenswrapper[4718]: E1123 14:47:26.441331 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:47:27 crc kubenswrapper[4718]: I1123 14:47:27.440947 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:47:27 crc kubenswrapper[4718]: E1123 14:47:27.441203 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:47:27 crc kubenswrapper[4718]: I1123 14:47:27.442334 4718 scope.go:117] "RemoveContainer" containerID="ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff" Nov 23 14:47:28 crc kubenswrapper[4718]: I1123 14:47:28.146910 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zjskv_aa4a9264-1cb9-41bc-a30a-4e09bde21387/ovnkube-controller/2.log" Nov 23 14:47:28 crc kubenswrapper[4718]: I1123 14:47:28.150041 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerStarted","Data":"c1809618031d9feff7a50eae28af1e1366af09869ccc0b4faba572b0ecc4c387"} Nov 23 14:47:28 crc kubenswrapper[4718]: I1123 14:47:28.150924 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:47:28 crc kubenswrapper[4718]: I1123 14:47:28.441079 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:47:28 crc kubenswrapper[4718]: E1123 14:47:28.441323 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:47:28 crc kubenswrapper[4718]: I1123 14:47:28.441773 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:47:28 crc kubenswrapper[4718]: E1123 14:47:28.441901 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:47:28 crc kubenswrapper[4718]: I1123 14:47:28.442187 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:47:28 crc kubenswrapper[4718]: E1123 14:47:28.442366 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:47:28 crc kubenswrapper[4718]: I1123 14:47:28.449028 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" podStartSLOduration=67.449010285 podStartE2EDuration="1m7.449010285s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:28.185422917 +0000 UTC m=+99.425042781" watchObservedRunningTime="2025-11-23 14:47:28.449010285 +0000 UTC m=+99.688630139" Nov 23 14:47:28 crc kubenswrapper[4718]: I1123 14:47:28.450470 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7qh4j"] Nov 23 14:47:29 crc kubenswrapper[4718]: I1123 14:47:29.151837 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:47:29 crc kubenswrapper[4718]: E1123 14:47:29.151951 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:47:29 crc kubenswrapper[4718]: I1123 14:47:29.440781 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:47:29 crc kubenswrapper[4718]: E1123 14:47:29.440946 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 14:47:30 crc kubenswrapper[4718]: I1123 14:47:30.440161 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:47:30 crc kubenswrapper[4718]: I1123 14:47:30.440161 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:47:30 crc kubenswrapper[4718]: I1123 14:47:30.440291 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:47:30 crc kubenswrapper[4718]: E1123 14:47:30.442330 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7qh4j" podUID="c82c8ca1-7a30-47c8-a679-abe265aca15b" Nov 23 14:47:30 crc kubenswrapper[4718]: E1123 14:47:30.442402 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 14:47:30 crc kubenswrapper[4718]: E1123 14:47:30.442522 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.269060 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.269266 4718 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.317284 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gwk4k"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.318425 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.325193 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xwp8r"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.327375 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rzq5f"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.327912 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rzq5f" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.327949 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.329401 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7vdf5"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.330400 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7vdf5" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.337136 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.337177 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.337590 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.337742 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.337882 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.337994 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.338523 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.338667 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.338776 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.338886 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.339215 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.339378 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.339578 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.339716 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.339847 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.339973 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.340027 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.340101 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.339981 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 23 14:47:31 
crc kubenswrapper[4718]: I1123 14:47:31.340192 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.339720 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.340690 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-wnl8l"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.341120 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.341201 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnl8l" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.341433 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.341976 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.342213 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.343123 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-7hg9r"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.343619 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nfkmp"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.344023 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nfkmp" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.344431 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-7hg9r" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.352095 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.353499 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8m5b"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.354004 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8m5b" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.354561 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-lglwp"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.355423 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-lglwp" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.360966 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8cmz"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.391685 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8cmz" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.393064 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tsbrh"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.393754 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.394694 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.395024 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.395168 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.395356 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.395540 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.395675 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.395756 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-sn5vq"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.395954 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.396117 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.396256 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.396255 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-sn5vq" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.400131 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.401801 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.401926 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.402056 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.402110 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.402228 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.402288 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.402669 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.402724 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.402729 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.402806 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.402964 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.403134 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.403844 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.404225 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.404358 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.404537 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.404705 4718 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.404850 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.405007 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.405064 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.405268 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.406744 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-mlwtl"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.407247 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mlwtl" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.407499 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mskq"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.407790 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mskq" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.410102 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.410213 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.410564 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2vccg"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.410799 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-bzr6j"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.410871 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.411111 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.411295 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.411592 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.411617 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.411743 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.412005 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2vccg" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.412879 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.413538 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.413660 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.418132 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.418646 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.418756 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.419102 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.419212 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.419299 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.419350 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.419138 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hnvnt"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.419958 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.419470 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.419506 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.419592 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.421152 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.421165 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.421251 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.421351 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.421378 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.421358 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.421580 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.422337 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.422433 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.422595 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.422708 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.422912 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.422988 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.423057 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.423196 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 23 14:47:31 crc 
kubenswrapper[4718]: I1123 14:47:31.423270 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.423338 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.423991 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d156eabe-7e2f-4336-acaa-4af08cfdea8d-auth-proxy-config\") pod \"machine-approver-56656f9798-wnl8l\" (UID: \"d156eabe-7e2f-4336-acaa-4af08cfdea8d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnl8l" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.424075 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.424050 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/95464c4e-4616-4ab3-9928-4dc41beee4af-images\") pod \"machine-api-operator-5694c8668f-7vdf5\" (UID: \"95464c4e-4616-4ab3-9928-4dc41beee4af\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7vdf5" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.436263 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.441242 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.441429 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.441817 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.441893 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442011 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44wsn\" (UniqueName: \"kubernetes.io/projected/ff9a491e-b915-4980-92f9-71844bc90a65-kube-api-access-44wsn\") pod \"openshift-controller-manager-operator-756b6f6bc6-g8m5b\" (UID: \"ff9a491e-b915-4980-92f9-71844bc90a65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8m5b" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442049 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xwp8r\" (UID: \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442080 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2635b714-cae2-41c2-8a7b-87075c04e2b3-audit-dir\") pod \"apiserver-76f77b778f-gwk4k\" (UID: 
\"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442104 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-client-ca\") pod \"controller-manager-879f6c89f-xwp8r\" (UID: \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442138 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2635b714-cae2-41c2-8a7b-87075c04e2b3-image-import-ca\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442157 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2635b714-cae2-41c2-8a7b-87075c04e2b3-encryption-config\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442175 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lkkx\" (UniqueName: \"kubernetes.io/projected/5607cd92-eff9-4eff-8dc7-4c0999b88fd4-kube-api-access-2lkkx\") pod \"console-operator-58897d9998-nfkmp\" (UID: \"5607cd92-eff9-4eff-8dc7-4c0999b88fd4\") " pod="openshift-console-operator/console-operator-58897d9998-nfkmp" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442198 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95464c4e-4616-4ab3-9928-4dc41beee4af-config\") pod \"machine-api-operator-5694c8668f-7vdf5\" (UID: \"95464c4e-4616-4ab3-9928-4dc41beee4af\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7vdf5" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442221 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5607cd92-eff9-4eff-8dc7-4c0999b88fd4-serving-cert\") pod \"console-operator-58897d9998-nfkmp\" (UID: \"5607cd92-eff9-4eff-8dc7-4c0999b88fd4\") " pod="openshift-console-operator/console-operator-58897d9998-nfkmp" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442242 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-serving-cert\") pod \"controller-manager-879f6c89f-xwp8r\" (UID: \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442260 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z87r6\" (UniqueName: \"kubernetes.io/projected/2635b714-cae2-41c2-8a7b-87075c04e2b3-kube-api-access-z87r6\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc 
kubenswrapper[4718]: I1123 14:47:31.442307 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c94ba48a-8de9-40a2-9c80-d4496b7b8cf6-metrics-tls\") pod \"dns-operator-744455d44c-lglwp\" (UID: \"c94ba48a-8de9-40a2-9c80-d4496b7b8cf6\") " pod="openshift-dns-operator/dns-operator-744455d44c-lglwp" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442340 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5607cd92-eff9-4eff-8dc7-4c0999b88fd4-trusted-ca\") pod \"console-operator-58897d9998-nfkmp\" (UID: \"5607cd92-eff9-4eff-8dc7-4c0999b88fd4\") " pod="openshift-console-operator/console-operator-58897d9998-nfkmp" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442356 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/95464c4e-4616-4ab3-9928-4dc41beee4af-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7vdf5\" (UID: \"95464c4e-4616-4ab3-9928-4dc41beee4af\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7vdf5" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442401 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2635b714-cae2-41c2-8a7b-87075c04e2b3-audit\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442426 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e8598b7-32cf-41c5-9a1b-382a5c0bcc2f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rzq5f\" (UID: \"0e8598b7-32cf-41c5-9a1b-382a5c0bcc2f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rzq5f" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442473 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2635b714-cae2-41c2-8a7b-87075c04e2b3-config\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442532 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5607cd92-eff9-4eff-8dc7-4c0999b88fd4-config\") pod \"console-operator-58897d9998-nfkmp\" (UID: \"5607cd92-eff9-4eff-8dc7-4c0999b88fd4\") " pod="openshift-console-operator/console-operator-58897d9998-nfkmp" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442554 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4569n\" (UniqueName: \"kubernetes.io/projected/0e8598b7-32cf-41c5-9a1b-382a5c0bcc2f-kube-api-access-4569n\") pod \"cluster-samples-operator-665b6dd947-rzq5f\" (UID: \"0e8598b7-32cf-41c5-9a1b-382a5c0bcc2f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rzq5f" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442610 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff9a491e-b915-4980-92f9-71844bc90a65-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-g8m5b\" (UID: \"ff9a491e-b915-4980-92f9-71844bc90a65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8m5b" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442630 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk9mk\" (UniqueName: \"kubernetes.io/projected/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-kube-api-access-pk9mk\") pod \"controller-manager-879f6c89f-xwp8r\" (UID: \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442650 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2635b714-cae2-41c2-8a7b-87075c04e2b3-etcd-client\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442668 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2635b714-cae2-41c2-8a7b-87075c04e2b3-etcd-serving-ca\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442691 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d156eabe-7e2f-4336-acaa-4af08cfdea8d-machine-approver-tls\") pod \"machine-approver-56656f9798-wnl8l\" (UID: \"d156eabe-7e2f-4336-acaa-4af08cfdea8d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnl8l" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442731 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5n8f\" (UniqueName: \"kubernetes.io/projected/c94ba48a-8de9-40a2-9c80-d4496b7b8cf6-kube-api-access-l5n8f\") pod \"dns-operator-744455d44c-lglwp\" (UID: \"c94ba48a-8de9-40a2-9c80-d4496b7b8cf6\") " pod="openshift-dns-operator/dns-operator-744455d44c-lglwp" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442754 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2635b714-cae2-41c2-8a7b-87075c04e2b3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442768 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442774 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2635b714-cae2-41c2-8a7b-87075c04e2b3-serving-cert\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 
14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442792 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2635b714-cae2-41c2-8a7b-87075c04e2b3-node-pullsecrets\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442814 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d156eabe-7e2f-4336-acaa-4af08cfdea8d-config\") pod \"machine-approver-56656f9798-wnl8l\" (UID: \"d156eabe-7e2f-4336-acaa-4af08cfdea8d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnl8l" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442856 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-config\") pod \"controller-manager-879f6c89f-xwp8r\" (UID: \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442896 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff9a491e-b915-4980-92f9-71844bc90a65-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-g8m5b\" (UID: \"ff9a491e-b915-4980-92f9-71844bc90a65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8m5b" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442916 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hgc2\" (UniqueName: \"kubernetes.io/projected/95464c4e-4616-4ab3-9928-4dc41beee4af-kube-api-access-9hgc2\") pod \"machine-api-operator-5694c8668f-7vdf5\" (UID: \"95464c4e-4616-4ab3-9928-4dc41beee4af\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7vdf5" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442935 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4b6r\" (UniqueName: \"kubernetes.io/projected/87a6d300-fa67-4762-9025-232fcb2ea96d-kube-api-access-t4b6r\") pod \"downloads-7954f5f757-7hg9r\" (UID: \"87a6d300-fa67-4762-9025-232fcb2ea96d\") " pod="openshift-console/downloads-7954f5f757-7hg9r" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.442954 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99dfd\" (UniqueName: \"kubernetes.io/projected/d156eabe-7e2f-4336-acaa-4af08cfdea8d-kube-api-access-99dfd\") pod \"machine-approver-56656f9798-wnl8l\" (UID: \"d156eabe-7e2f-4336-acaa-4af08cfdea8d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnl8l" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.450177 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.451480 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.451733 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.452371 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.453544 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mzr52"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.454082 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gwk4k"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.454153 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mzr52" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.454359 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-45fwt"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.454743 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.454928 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-45fwt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.455558 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.455756 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.459791 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.459972 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-m47rw"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.460590 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m47rw" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.464366 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-7f7vb"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.464882 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-7f7vb" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.466272 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8xb95"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.466872 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8xb95" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.472510 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.475105 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnph"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.475560 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnph" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.476563 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vl2sm"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.477651 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vl2sm" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.477949 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7vdf5"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.479874 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xwp8r"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.484949 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-crvv6"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.485615 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-crvv6" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.486192 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jznqv"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.486702 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jznqv" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.487469 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rzq5f"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.493705 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-gptqg"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.494303 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gptqg" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.495541 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czc9t"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.496076 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czc9t" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.496610 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-q52hx"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.497107 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-q52hx" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.498923 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-lglwp"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.499612 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7x8l8"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.502243 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.502728 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wffdm"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.502796 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7x8l8" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.503735 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-42h2c"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.504581 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-42h2c" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.504754 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-c8v2t"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.505377 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-c8v2t" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.509211 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wffdm" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.510317 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-tdwkr"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.511340 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-tdwkr" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.514309 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mzr52"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.514411 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.517727 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398485-2272c"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.518929 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398485-2272c" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.520796 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qpsmt"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.521625 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.522559 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2vccg"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.523900 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tsbrh"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.525794 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gzctt"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.526275 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gzctt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.528913 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mskq"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.530256 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.530621 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.530869 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8xb95"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.532047 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-sn5vq"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.533164 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-m47rw"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.534323 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-tdwkr"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.535356 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nfkmp"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.536738 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.538033 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-7hg9r"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.539105 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-crvv6"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.540127 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8cmz"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.541163 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8m5b"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.542116 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-bzr6j"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.543294 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-c8v2t"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.543526 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d156eabe-7e2f-4336-acaa-4af08cfdea8d-config\") pod \"machine-approver-56656f9798-wnl8l\" (UID: \"d156eabe-7e2f-4336-acaa-4af08cfdea8d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnl8l" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.543559 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bb094bc3-3ea8-4a2b-9f41-61621ba47667-etcd-ca\") pod \"etcd-operator-b45778765-45fwt\" (UID: \"bb094bc3-3ea8-4a2b-9f41-61621ba47667\") " pod="openshift-etcd-operator/etcd-operator-b45778765-45fwt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.543580 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7600967f-215f-411c-b19a-8433e7a266ee-available-featuregates\") pod \"openshift-config-operator-7777fb866f-mlwtl\" (UID: \"7600967f-215f-411c-b19a-8433e7a266ee\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mlwtl" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.543600 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-config\") pod \"controller-manager-879f6c89f-xwp8r\" (UID: \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.543617 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51a36f59-fabf-4d64-bd21-c294a70460cd-config\") pod \"kube-controller-manager-operator-78b949d7b-d8cmz\" (UID: \"51a36f59-fabf-4d64-bd21-c294a70460cd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8cmz" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.543647 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff9a491e-b915-4980-92f9-71844bc90a65-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-g8m5b\" (UID: \"ff9a491e-b915-4980-92f9-71844bc90a65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8m5b" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.543664 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4b6r\" (UniqueName: \"kubernetes.io/projected/87a6d300-fa67-4762-9025-232fcb2ea96d-kube-api-access-t4b6r\") pod 
\"downloads-7954f5f757-7hg9r\" (UID: \"87a6d300-fa67-4762-9025-232fcb2ea96d\") " pod="openshift-console/downloads-7954f5f757-7hg9r" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.543682 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99dfd\" (UniqueName: \"kubernetes.io/projected/d156eabe-7e2f-4336-acaa-4af08cfdea8d-kube-api-access-99dfd\") pod \"machine-approver-56656f9798-wnl8l\" (UID: \"d156eabe-7e2f-4336-acaa-4af08cfdea8d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnl8l" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.543720 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pscmr\" (UniqueName: \"kubernetes.io/projected/26893a84-2d41-4266-80db-235e4057a14f-kube-api-access-pscmr\") pod \"migrator-59844c95c7-mzr52\" (UID: \"26893a84-2d41-4266-80db-235e4057a14f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mzr52" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.543923 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.543977 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hgc2\" (UniqueName: \"kubernetes.io/projected/95464c4e-4616-4ab3-9928-4dc41beee4af-kube-api-access-9hgc2\") pod \"machine-api-operator-5694c8668f-7vdf5\" (UID: \"95464c4e-4616-4ab3-9928-4dc41beee4af\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7vdf5" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.544022 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51a36f59-fabf-4d64-bd21-c294a70460cd-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-d8cmz\" (UID: \"51a36f59-fabf-4d64-bd21-c294a70460cd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8cmz" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.544051 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.544084 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fb8d21d-864e-4fa4-9a79-0ec8c8160fd4-config\") pod \"kube-apiserver-operator-766d6c64bb-jxnph\" (UID: \"4fb8d21d-864e-4fa4-9a79-0ec8c8160fd4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnph" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.544108 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d156eabe-7e2f-4336-acaa-4af08cfdea8d-auth-proxy-config\") pod 
\"machine-approver-56656f9798-wnl8l\" (UID: \"d156eabe-7e2f-4336-acaa-4af08cfdea8d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnl8l" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.544131 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-oauth-serving-cert\") pod \"console-f9d7485db-bzr6j\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.544149 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d156eabe-7e2f-4336-acaa-4af08cfdea8d-config\") pod \"machine-approver-56656f9798-wnl8l\" (UID: \"d156eabe-7e2f-4336-acaa-4af08cfdea8d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnl8l" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.544173 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/95464c4e-4616-4ab3-9928-4dc41beee4af-images\") pod \"machine-api-operator-5694c8668f-7vdf5\" (UID: \"95464c4e-4616-4ab3-9928-4dc41beee4af\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7vdf5" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.544200 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb094bc3-3ea8-4a2b-9f41-61621ba47667-serving-cert\") pod \"etcd-operator-b45778765-45fwt\" (UID: \"bb094bc3-3ea8-4a2b-9f41-61621ba47667\") " pod="openshift-etcd-operator/etcd-operator-b45778765-45fwt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.544222 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bb094bc3-3ea8-4a2b-9f41-61621ba47667-etcd-client\") pod \"etcd-operator-b45778765-45fwt\" (UID: \"bb094bc3-3ea8-4a2b-9f41-61621ba47667\") " pod="openshift-etcd-operator/etcd-operator-b45778765-45fwt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.544240 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv72h\" (UniqueName: \"kubernetes.io/projected/bb094bc3-3ea8-4a2b-9f41-61621ba47667-kube-api-access-qv72h\") pod \"etcd-operator-b45778765-45fwt\" (UID: \"bb094bc3-3ea8-4a2b-9f41-61621ba47667\") " pod="openshift-etcd-operator/etcd-operator-b45778765-45fwt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.544248 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-gptqg"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.544546 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bb094bc3-3ea8-4a2b-9f41-61621ba47667-etcd-service-ca\") pod \"etcd-operator-b45778765-45fwt\" (UID: \"bb094bc3-3ea8-4a2b-9f41-61621ba47667\") " pod="openshift-etcd-operator/etcd-operator-b45778765-45fwt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.544576 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fa4bc064-a334-47bd-820e-00ced1c89025-audit-policies\") 
pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.544598 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.544623 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv4th\" (UniqueName: \"kubernetes.io/projected/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-kube-api-access-gv4th\") pod \"console-f9d7485db-bzr6j\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.544642 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.544699 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44wsn\" (UniqueName: \"kubernetes.io/projected/ff9a491e-b915-4980-92f9-71844bc90a65-kube-api-access-44wsn\") pod \"openshift-controller-manager-operator-756b6f6bc6-g8m5b\" (UID: \"ff9a491e-b915-4980-92f9-71844bc90a65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8m5b" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.544730 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xwp8r\" (UID: \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.544869 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-config\") pod \"controller-manager-879f6c89f-xwp8r\" (UID: \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.544905 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d156eabe-7e2f-4336-acaa-4af08cfdea8d-auth-proxy-config\") pod \"machine-approver-56656f9798-wnl8l\" (UID: \"d156eabe-7e2f-4336-acaa-4af08cfdea8d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnl8l" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.544933 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51a36f59-fabf-4d64-bd21-c294a70460cd-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-d8cmz\" 
(UID: \"51a36f59-fabf-4d64-bd21-c294a70460cd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8cmz" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.544985 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2635b714-cae2-41c2-8a7b-87075c04e2b3-audit-dir\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.544993 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/95464c4e-4616-4ab3-9928-4dc41beee4af-images\") pod \"machine-api-operator-5694c8668f-7vdf5\" (UID: \"95464c4e-4616-4ab3-9928-4dc41beee4af\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7vdf5" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545031 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-client-ca\") pod \"controller-manager-879f6c89f-xwp8r\" (UID: \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545063 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2635b714-cae2-41c2-8a7b-87075c04e2b3-audit-dir\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545065 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fb8d21d-864e-4fa4-9a79-0ec8c8160fd4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jxnph\" (UID: \"4fb8d21d-864e-4fa4-9a79-0ec8c8160fd4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnph" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545119 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2635b714-cae2-41c2-8a7b-87075c04e2b3-image-import-ca\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545143 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2635b714-cae2-41c2-8a7b-87075c04e2b3-encryption-config\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545169 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lkkx\" (UniqueName: \"kubernetes.io/projected/5607cd92-eff9-4eff-8dc7-4c0999b88fd4-kube-api-access-2lkkx\") pod \"console-operator-58897d9998-nfkmp\" (UID: \"5607cd92-eff9-4eff-8dc7-4c0999b88fd4\") " pod="openshift-console-operator/console-operator-58897d9998-nfkmp" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545219 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/95464c4e-4616-4ab3-9928-4dc41beee4af-config\") pod \"machine-api-operator-5694c8668f-7vdf5\" (UID: \"95464c4e-4616-4ab3-9928-4dc41beee4af\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7vdf5" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545242 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545269 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-console-serving-cert\") pod \"console-f9d7485db-bzr6j\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545289 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jfq8\" (UniqueName: \"kubernetes.io/projected/7600967f-215f-411c-b19a-8433e7a266ee-kube-api-access-2jfq8\") pod \"openshift-config-operator-7777fb866f-mlwtl\" (UID: \"7600967f-215f-411c-b19a-8433e7a266ee\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mlwtl" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545311 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fa4bc064-a334-47bd-820e-00ced1c89025-audit-dir\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545340 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5607cd92-eff9-4eff-8dc7-4c0999b88fd4-serving-cert\") pod \"console-operator-58897d9998-nfkmp\" (UID: \"5607cd92-eff9-4eff-8dc7-4c0999b88fd4\") " pod="openshift-console-operator/console-operator-58897d9998-nfkmp" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545362 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-serving-cert\") pod \"controller-manager-879f6c89f-xwp8r\" (UID: \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545388 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-console-oauth-config\") pod \"console-f9d7485db-bzr6j\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545412 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z87r6\" (UniqueName: \"kubernetes.io/projected/2635b714-cae2-41c2-8a7b-87075c04e2b3-kube-api-access-z87r6\") pod \"apiserver-76f77b778f-gwk4k\" 
(UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545454 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c94ba48a-8de9-40a2-9c80-d4496b7b8cf6-metrics-tls\") pod \"dns-operator-744455d44c-lglwp\" (UID: \"c94ba48a-8de9-40a2-9c80-d4496b7b8cf6\") " pod="openshift-dns-operator/dns-operator-744455d44c-lglwp" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545476 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4fb8d21d-864e-4fa4-9a79-0ec8c8160fd4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jxnph\" (UID: \"4fb8d21d-864e-4fa4-9a79-0ec8c8160fd4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnph" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545516 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-trusted-ca-bundle\") pod \"console-f9d7485db-bzr6j\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545536 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545558 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/95464c4e-4616-4ab3-9928-4dc41beee4af-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7vdf5\" (UID: \"95464c4e-4616-4ab3-9928-4dc41beee4af\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7vdf5" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545581 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5607cd92-eff9-4eff-8dc7-4c0999b88fd4-trusted-ca\") pod \"console-operator-58897d9998-nfkmp\" (UID: \"5607cd92-eff9-4eff-8dc7-4c0999b88fd4\") " pod="openshift-console-operator/console-operator-58897d9998-nfkmp" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545596 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-client-ca\") pod \"controller-manager-879f6c89f-xwp8r\" (UID: \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545601 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2635b714-cae2-41c2-8a7b-87075c04e2b3-audit\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545638 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e8598b7-32cf-41c5-9a1b-382a5c0bcc2f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rzq5f\" (UID: \"0e8598b7-32cf-41c5-9a1b-382a5c0bcc2f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rzq5f" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545661 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545689 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2635b714-cae2-41c2-8a7b-87075c04e2b3-config\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545706 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2svw\" (UniqueName: \"kubernetes.io/projected/fa4bc064-a334-47bd-820e-00ced1c89025-kube-api-access-v2svw\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545727 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545745 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5607cd92-eff9-4eff-8dc7-4c0999b88fd4-config\") pod \"console-operator-58897d9998-nfkmp\" (UID: \"5607cd92-eff9-4eff-8dc7-4c0999b88fd4\") " pod="openshift-console-operator/console-operator-58897d9998-nfkmp" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545759 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4569n\" (UniqueName: \"kubernetes.io/projected/0e8598b7-32cf-41c5-9a1b-382a5c0bcc2f-kube-api-access-4569n\") pod \"cluster-samples-operator-665b6dd947-rzq5f\" (UID: \"0e8598b7-32cf-41c5-9a1b-382a5c0bcc2f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rzq5f" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545762 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xwp8r\" (UID: \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545777 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545798 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-console-config\") pod \"console-f9d7485db-bzr6j\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545816 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545834 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff9a491e-b915-4980-92f9-71844bc90a65-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-g8m5b\" (UID: \"ff9a491e-b915-4980-92f9-71844bc90a65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8m5b" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545851 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk9mk\" (UniqueName: \"kubernetes.io/projected/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-kube-api-access-pk9mk\") pod \"controller-manager-879f6c89f-xwp8r\" (UID: \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545869 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-service-ca\") pod \"console-f9d7485db-bzr6j\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545887 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6bfbc7c8-2c86-4bf8-87b6-6eec240a5e2e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8xb95\" (UID: \"6bfbc7c8-2c86-4bf8-87b6-6eec240a5e2e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8xb95" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545904 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb094bc3-3ea8-4a2b-9f41-61621ba47667-config\") pod \"etcd-operator-b45778765-45fwt\" (UID: \"bb094bc3-3ea8-4a2b-9f41-61621ba47667\") " pod="openshift-etcd-operator/etcd-operator-b45778765-45fwt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545926 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2635b714-cae2-41c2-8a7b-87075c04e2b3-etcd-serving-ca\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.545944 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d156eabe-7e2f-4336-acaa-4af08cfdea8d-machine-approver-tls\") pod \"machine-approver-56656f9798-wnl8l\" (UID: \"d156eabe-7e2f-4336-acaa-4af08cfdea8d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnl8l" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.546367 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7600967f-215f-411c-b19a-8433e7a266ee-serving-cert\") pod \"openshift-config-operator-7777fb866f-mlwtl\" (UID: \"7600967f-215f-411c-b19a-8433e7a266ee\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mlwtl" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.546417 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2635b714-cae2-41c2-8a7b-87075c04e2b3-etcd-client\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.546460 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5n8f\" (UniqueName: \"kubernetes.io/projected/c94ba48a-8de9-40a2-9c80-d4496b7b8cf6-kube-api-access-l5n8f\") pod \"dns-operator-744455d44c-lglwp\" (UID: \"c94ba48a-8de9-40a2-9c80-d4496b7b8cf6\") " pod="openshift-dns-operator/dns-operator-744455d44c-lglwp" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.546473 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2635b714-cae2-41c2-8a7b-87075c04e2b3-config\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.546497 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6fzs\" (UniqueName: \"kubernetes.io/projected/6bfbc7c8-2c86-4bf8-87b6-6eec240a5e2e-kube-api-access-q6fzs\") pod \"control-plane-machine-set-operator-78cbb6b69f-8xb95\" (UID: \"6bfbc7c8-2c86-4bf8-87b6-6eec240a5e2e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8xb95" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.546522 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.547094 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff9a491e-b915-4980-92f9-71844bc90a65-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-g8m5b\" (UID: 
\"ff9a491e-b915-4980-92f9-71844bc90a65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8m5b" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.547321 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95464c4e-4616-4ab3-9928-4dc41beee4af-config\") pod \"machine-api-operator-5694c8668f-7vdf5\" (UID: \"95464c4e-4616-4ab3-9928-4dc41beee4af\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7vdf5" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.547943 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2635b714-cae2-41c2-8a7b-87075c04e2b3-etcd-serving-ca\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.548037 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2635b714-cae2-41c2-8a7b-87075c04e2b3-serving-cert\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.548063 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2635b714-cae2-41c2-8a7b-87075c04e2b3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.548082 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2635b714-cae2-41c2-8a7b-87075c04e2b3-node-pullsecrets\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.548481 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5607cd92-eff9-4eff-8dc7-4c0999b88fd4-config\") pod \"console-operator-58897d9998-nfkmp\" (UID: \"5607cd92-eff9-4eff-8dc7-4c0999b88fd4\") " pod="openshift-console-operator/console-operator-58897d9998-nfkmp" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.548539 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hnvnt"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.548685 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2635b714-cae2-41c2-8a7b-87075c04e2b3-node-pullsecrets\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.549399 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-jh7tn"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.549460 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2635b714-cae2-41c2-8a7b-87075c04e2b3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gwk4k\" (UID: 
\"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.549817 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff9a491e-b915-4980-92f9-71844bc90a65-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-g8m5b\" (UID: \"ff9a491e-b915-4980-92f9-71844bc90a65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8m5b" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.550489 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2635b714-cae2-41c2-8a7b-87075c04e2b3-audit\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.550545 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2635b714-cae2-41c2-8a7b-87075c04e2b3-image-import-ca\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.550554 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/95464c4e-4616-4ab3-9928-4dc41beee4af-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7vdf5\" (UID: \"95464c4e-4616-4ab3-9928-4dc41beee4af\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7vdf5" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.550578 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.551142 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c94ba48a-8de9-40a2-9c80-d4496b7b8cf6-metrics-tls\") pod \"dns-operator-744455d44c-lglwp\" (UID: \"c94ba48a-8de9-40a2-9c80-d4496b7b8cf6\") " pod="openshift-dns-operator/dns-operator-744455d44c-lglwp" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.551593 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d156eabe-7e2f-4336-acaa-4af08cfdea8d-machine-approver-tls\") pod \"machine-approver-56656f9798-wnl8l\" (UID: \"d156eabe-7e2f-4336-acaa-4af08cfdea8d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnl8l" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.552107 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czc9t"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.552150 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398485-2272c"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.552246 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-jh7tn" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.552822 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5qvjc"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.553152 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5607cd92-eff9-4eff-8dc7-4c0999b88fd4-serving-cert\") pod \"console-operator-58897d9998-nfkmp\" (UID: \"5607cd92-eff9-4eff-8dc7-4c0999b88fd4\") " pod="openshift-console-operator/console-operator-58897d9998-nfkmp" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.553299 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2635b714-cae2-41c2-8a7b-87075c04e2b3-serving-cert\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.553709 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2635b714-cae2-41c2-8a7b-87075c04e2b3-etcd-client\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.553837 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e8598b7-32cf-41c5-9a1b-382a5c0bcc2f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rzq5f\" (UID: \"0e8598b7-32cf-41c5-9a1b-382a5c0bcc2f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rzq5f" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.553909 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5qvjc" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.554245 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jznqv"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.554889 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5607cd92-eff9-4eff-8dc7-4c0999b88fd4-trusted-ca\") pod \"console-operator-58897d9998-nfkmp\" (UID: \"5607cd92-eff9-4eff-8dc7-4c0999b88fd4\") " pod="openshift-console-operator/console-operator-58897d9998-nfkmp" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.555873 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-serving-cert\") pod \"controller-manager-879f6c89f-xwp8r\" (UID: \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.555911 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-45fwt"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.556791 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2635b714-cae2-41c2-8a7b-87075c04e2b3-encryption-config\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.556995 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-mlwtl"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.557958 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnph"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.559656 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vl2sm"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.559917 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jh7tn"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.561694 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gzctt"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.561813 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5qvjc"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.565647 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-mcr5m"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.566412 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-mcr5m" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.567135 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wffdm"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.568147 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-q52hx"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.569318 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.569368 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qpsmt"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.570410 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7x8l8"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.571416 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-42h2c"] Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.589682 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.610160 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.630692 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.649429 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.649509 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-console-serving-cert\") pod \"console-f9d7485db-bzr6j\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.649545 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jfq8\" (UniqueName: \"kubernetes.io/projected/7600967f-215f-411c-b19a-8433e7a266ee-kube-api-access-2jfq8\") pod \"openshift-config-operator-7777fb866f-mlwtl\" (UID: \"7600967f-215f-411c-b19a-8433e7a266ee\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mlwtl" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.649581 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fa4bc064-a334-47bd-820e-00ced1c89025-audit-dir\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.649621 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-console-oauth-config\") pod \"console-f9d7485db-bzr6j\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.649658 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4fb8d21d-864e-4fa4-9a79-0ec8c8160fd4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jxnph\" (UID: \"4fb8d21d-864e-4fa4-9a79-0ec8c8160fd4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnph" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.649748 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.649819 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-trusted-ca-bundle\") pod \"console-f9d7485db-bzr6j\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.649856 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.649904 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2svw\" (UniqueName: \"kubernetes.io/projected/fa4bc064-a334-47bd-820e-00ced1c89025-kube-api-access-v2svw\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.649956 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.649999 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.650042 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-console-config\") pod \"console-f9d7485db-bzr6j\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.650149 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.650276 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-service-ca\") pod \"console-f9d7485db-bzr6j\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.650532 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6bfbc7c8-2c86-4bf8-87b6-6eec240a5e2e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8xb95\" (UID: \"6bfbc7c8-2c86-4bf8-87b6-6eec240a5e2e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8xb95" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.650647 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb094bc3-3ea8-4a2b-9f41-61621ba47667-config\") pod \"etcd-operator-b45778765-45fwt\" (UID: \"bb094bc3-3ea8-4a2b-9f41-61621ba47667\") " pod="openshift-etcd-operator/etcd-operator-b45778765-45fwt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.650748 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7600967f-215f-411c-b19a-8433e7a266ee-serving-cert\") pod \"openshift-config-operator-7777fb866f-mlwtl\" (UID: \"7600967f-215f-411c-b19a-8433e7a266ee\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mlwtl" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.650955 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.650991 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6fzs\" (UniqueName: \"kubernetes.io/projected/6bfbc7c8-2c86-4bf8-87b6-6eec240a5e2e-kube-api-access-q6fzs\") pod \"control-plane-machine-set-operator-78cbb6b69f-8xb95\" (UID: \"6bfbc7c8-2c86-4bf8-87b6-6eec240a5e2e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8xb95" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.651024 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7600967f-215f-411c-b19a-8433e7a266ee-available-featuregates\") pod 
\"openshift-config-operator-7777fb866f-mlwtl\" (UID: \"7600967f-215f-411c-b19a-8433e7a266ee\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mlwtl" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.651050 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bb094bc3-3ea8-4a2b-9f41-61621ba47667-etcd-ca\") pod \"etcd-operator-b45778765-45fwt\" (UID: \"bb094bc3-3ea8-4a2b-9f41-61621ba47667\") " pod="openshift-etcd-operator/etcd-operator-b45778765-45fwt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.651076 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51a36f59-fabf-4d64-bd21-c294a70460cd-config\") pod \"kube-controller-manager-operator-78b949d7b-d8cmz\" (UID: \"51a36f59-fabf-4d64-bd21-c294a70460cd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8cmz" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.651149 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pscmr\" (UniqueName: \"kubernetes.io/projected/26893a84-2d41-4266-80db-235e4057a14f-kube-api-access-pscmr\") pod \"migrator-59844c95c7-mzr52\" (UID: \"26893a84-2d41-4266-80db-235e4057a14f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mzr52" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.651230 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.651293 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.651330 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fb8d21d-864e-4fa4-9a79-0ec8c8160fd4-config\") pod \"kube-apiserver-operator-766d6c64bb-jxnph\" (UID: \"4fb8d21d-864e-4fa4-9a79-0ec8c8160fd4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnph" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.651363 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-oauth-serving-cert\") pod \"console-f9d7485db-bzr6j\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.651396 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51a36f59-fabf-4d64-bd21-c294a70460cd-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-d8cmz\" (UID: \"51a36f59-fabf-4d64-bd21-c294a70460cd\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8cmz" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.651432 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb094bc3-3ea8-4a2b-9f41-61621ba47667-serving-cert\") pod \"etcd-operator-b45778765-45fwt\" (UID: \"bb094bc3-3ea8-4a2b-9f41-61621ba47667\") " pod="openshift-etcd-operator/etcd-operator-b45778765-45fwt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.651486 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bb094bc3-3ea8-4a2b-9f41-61621ba47667-etcd-client\") pod \"etcd-operator-b45778765-45fwt\" (UID: \"bb094bc3-3ea8-4a2b-9f41-61621ba47667\") " pod="openshift-etcd-operator/etcd-operator-b45778765-45fwt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.651515 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv72h\" (UniqueName: \"kubernetes.io/projected/bb094bc3-3ea8-4a2b-9f41-61621ba47667-kube-api-access-qv72h\") pod \"etcd-operator-b45778765-45fwt\" (UID: \"bb094bc3-3ea8-4a2b-9f41-61621ba47667\") " pod="openshift-etcd-operator/etcd-operator-b45778765-45fwt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.651552 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fa4bc064-a334-47bd-820e-00ced1c89025-audit-policies\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.651735 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.651797 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bb094bc3-3ea8-4a2b-9f41-61621ba47667-etcd-service-ca\") pod \"etcd-operator-b45778765-45fwt\" (UID: \"bb094bc3-3ea8-4a2b-9f41-61621ba47667\") " pod="openshift-etcd-operator/etcd-operator-b45778765-45fwt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.651894 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.651935 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv4th\" (UniqueName: \"kubernetes.io/projected/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-kube-api-access-gv4th\") pod \"console-f9d7485db-bzr6j\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.651980 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51a36f59-fabf-4d64-bd21-c294a70460cd-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-d8cmz\" (UID: \"51a36f59-fabf-4d64-bd21-c294a70460cd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8cmz" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.652008 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fb8d21d-864e-4fa4-9a79-0ec8c8160fd4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jxnph\" (UID: \"4fb8d21d-864e-4fa4-9a79-0ec8c8160fd4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnph" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.651469 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fa4bc064-a334-47bd-820e-00ced1c89025-audit-dir\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.653403 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.653716 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51a36f59-fabf-4d64-bd21-c294a70460cd-config\") pod \"kube-controller-manager-operator-78b949d7b-d8cmz\" (UID: \"51a36f59-fabf-4d64-bd21-c294a70460cd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8cmz" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.653729 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.653921 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fa4bc064-a334-47bd-820e-00ced1c89025-audit-policies\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.654223 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-oauth-serving-cert\") pod \"console-f9d7485db-bzr6j\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.655457 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7600967f-215f-411c-b19a-8433e7a266ee-available-featuregates\") pod \"openshift-config-operator-7777fb866f-mlwtl\" (UID: 
\"7600967f-215f-411c-b19a-8433e7a266ee\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mlwtl" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.655852 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-service-ca\") pod \"console-f9d7485db-bzr6j\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.656017 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.656280 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-console-config\") pod \"console-f9d7485db-bzr6j\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.656563 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.656706 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-console-oauth-config\") pod \"console-f9d7485db-bzr6j\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.656782 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-trusted-ca-bundle\") pod \"console-f9d7485db-bzr6j\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.657612 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7600967f-215f-411c-b19a-8433e7a266ee-serving-cert\") pod \"openshift-config-operator-7777fb866f-mlwtl\" (UID: \"7600967f-215f-411c-b19a-8433e7a266ee\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mlwtl" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.657807 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.658195 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/51a36f59-fabf-4d64-bd21-c294a70460cd-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-d8cmz\" (UID: \"51a36f59-fabf-4d64-bd21-c294a70460cd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8cmz" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.658919 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.659415 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.660378 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.660435 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.661243 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.661923 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.663953 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.671204 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.673470 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-console-serving-cert\") pod 
\"console-f9d7485db-bzr6j\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.709719 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.730539 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.750354 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.770311 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.790686 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.810537 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.814736 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bb094bc3-3ea8-4a2b-9f41-61621ba47667-etcd-service-ca\") pod \"etcd-operator-b45778765-45fwt\" (UID: \"bb094bc3-3ea8-4a2b-9f41-61621ba47667\") " pod="openshift-etcd-operator/etcd-operator-b45778765-45fwt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.830665 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.838935 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bb094bc3-3ea8-4a2b-9f41-61621ba47667-etcd-client\") pod \"etcd-operator-b45778765-45fwt\" (UID: \"bb094bc3-3ea8-4a2b-9f41-61621ba47667\") " pod="openshift-etcd-operator/etcd-operator-b45778765-45fwt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.851643 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.869837 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.880962 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb094bc3-3ea8-4a2b-9f41-61621ba47667-serving-cert\") pod \"etcd-operator-b45778765-45fwt\" (UID: \"bb094bc3-3ea8-4a2b-9f41-61621ba47667\") " pod="openshift-etcd-operator/etcd-operator-b45778765-45fwt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.889937 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.910972 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.931920 4718 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-operator-config" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.934195 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb094bc3-3ea8-4a2b-9f41-61621ba47667-config\") pod \"etcd-operator-b45778765-45fwt\" (UID: \"bb094bc3-3ea8-4a2b-9f41-61621ba47667\") " pod="openshift-etcd-operator/etcd-operator-b45778765-45fwt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.950708 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.958987 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bb094bc3-3ea8-4a2b-9f41-61621ba47667-etcd-ca\") pod \"etcd-operator-b45778765-45fwt\" (UID: \"bb094bc3-3ea8-4a2b-9f41-61621ba47667\") " pod="openshift-etcd-operator/etcd-operator-b45778765-45fwt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.971032 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 23 14:47:31 crc kubenswrapper[4718]: I1123 14:47:31.991395 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.011036 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.041915 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.050554 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.070575 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.091291 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.111201 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.131200 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.150386 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.170710 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.191882 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.211215 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.221345 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6bfbc7c8-2c86-4bf8-87b6-6eec240a5e2e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8xb95\" (UID: \"6bfbc7c8-2c86-4bf8-87b6-6eec240a5e2e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8xb95" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.231081 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.251146 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.271430 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.278624 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fb8d21d-864e-4fa4-9a79-0ec8c8160fd4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jxnph\" (UID: \"4fb8d21d-864e-4fa4-9a79-0ec8c8160fd4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnph" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.291833 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.310599 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.314768 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fb8d21d-864e-4fa4-9a79-0ec8c8160fd4-config\") pod \"kube-apiserver-operator-766d6c64bb-jxnph\" (UID: \"4fb8d21d-864e-4fa4-9a79-0ec8c8160fd4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnph" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.330823 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.351076 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.370617 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.390607 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.410129 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.430904 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.440764 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.440883 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.441391 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.450302 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.470507 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.489259 4718 request.go:700] Waited for 1.002363757s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-dockercfg-qt55r&limit=500&resourceVersion=0 Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.491759 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.511454 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.530500 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.550412 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.590590 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.610299 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.631309 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.651307 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.671171 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.691654 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.711543 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 
14:47:32.730805 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.750610 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.770125 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.790193 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.811357 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.831263 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.851035 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.870194 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.890795 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.911052 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.931383 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.951412 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.970521 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 23 14:47:32 crc kubenswrapper[4718]: I1123 14:47:32.990995 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.010972 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.031094 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.051775 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.071427 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.090716 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.110584 4718 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.130219 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.151334 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.170808 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.190038 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.220085 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.230990 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.277422 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99dfd\" (UniqueName: \"kubernetes.io/projected/d156eabe-7e2f-4336-acaa-4af08cfdea8d-kube-api-access-99dfd\") pod \"machine-approver-56656f9798-wnl8l\" (UID: \"d156eabe-7e2f-4336-acaa-4af08cfdea8d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnl8l" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.295381 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4b6r\" (UniqueName: \"kubernetes.io/projected/87a6d300-fa67-4762-9025-232fcb2ea96d-kube-api-access-t4b6r\") pod \"downloads-7954f5f757-7hg9r\" (UID: \"87a6d300-fa67-4762-9025-232fcb2ea96d\") " pod="openshift-console/downloads-7954f5f757-7hg9r" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.318061 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hgc2\" (UniqueName: \"kubernetes.io/projected/95464c4e-4616-4ab3-9928-4dc41beee4af-kube-api-access-9hgc2\") pod \"machine-api-operator-5694c8668f-7vdf5\" (UID: \"95464c4e-4616-4ab3-9928-4dc41beee4af\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7vdf5" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.328859 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44wsn\" (UniqueName: \"kubernetes.io/projected/ff9a491e-b915-4980-92f9-71844bc90a65-kube-api-access-44wsn\") pod \"openshift-controller-manager-operator-756b6f6bc6-g8m5b\" (UID: \"ff9a491e-b915-4980-92f9-71844bc90a65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8m5b" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.346394 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lkkx\" (UniqueName: \"kubernetes.io/projected/5607cd92-eff9-4eff-8dc7-4c0999b88fd4-kube-api-access-2lkkx\") pod \"console-operator-58897d9998-nfkmp\" (UID: \"5607cd92-eff9-4eff-8dc7-4c0999b88fd4\") " pod="openshift-console-operator/console-operator-58897d9998-nfkmp" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.368652 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-l5n8f\" (UniqueName: \"kubernetes.io/projected/c94ba48a-8de9-40a2-9c80-d4496b7b8cf6-kube-api-access-l5n8f\") pod \"dns-operator-744455d44c-lglwp\" (UID: \"c94ba48a-8de9-40a2-9c80-d4496b7b8cf6\") " pod="openshift-dns-operator/dns-operator-744455d44c-lglwp" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.386907 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z87r6\" (UniqueName: \"kubernetes.io/projected/2635b714-cae2-41c2-8a7b-87075c04e2b3-kube-api-access-z87r6\") pod \"apiserver-76f77b778f-gwk4k\" (UID: \"2635b714-cae2-41c2-8a7b-87075c04e2b3\") " pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.407554 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4569n\" (UniqueName: \"kubernetes.io/projected/0e8598b7-32cf-41c5-9a1b-382a5c0bcc2f-kube-api-access-4569n\") pod \"cluster-samples-operator-665b6dd947-rzq5f\" (UID: \"0e8598b7-32cf-41c5-9a1b-382a5c0bcc2f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rzq5f" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.431140 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.437412 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk9mk\" (UniqueName: \"kubernetes.io/projected/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-kube-api-access-pk9mk\") pod \"controller-manager-879f6c89f-xwp8r\" (UID: \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.450966 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.454710 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rzq5f" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.470345 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.491816 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.502937 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.509197 4718 request.go:700] Waited for 1.955102407s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&limit=500&resourceVersion=0 Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.511060 4718 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.511090 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.522329 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7vdf5" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.531274 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.558222 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.570311 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.571126 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnl8l" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.574890 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nfkmp" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.581136 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-7hg9r" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.588086 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8m5b" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.591226 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.596747 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-lglwp" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.635855 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4fb8d21d-864e-4fa4-9a79-0ec8c8160fd4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jxnph\" (UID: \"4fb8d21d-864e-4fa4-9a79-0ec8c8160fd4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnph" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.654080 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jfq8\" (UniqueName: \"kubernetes.io/projected/7600967f-215f-411c-b19a-8433e7a266ee-kube-api-access-2jfq8\") pod \"openshift-config-operator-7777fb866f-mlwtl\" (UID: \"7600967f-215f-411c-b19a-8433e7a266ee\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mlwtl" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.665893 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pscmr\" (UniqueName: \"kubernetes.io/projected/26893a84-2d41-4266-80db-235e4057a14f-kube-api-access-pscmr\") pod \"migrator-59844c95c7-mzr52\" (UID: \"26893a84-2d41-4266-80db-235e4057a14f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mzr52" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.693656 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv72h\" (UniqueName: \"kubernetes.io/projected/bb094bc3-3ea8-4a2b-9f41-61621ba47667-kube-api-access-qv72h\") pod \"etcd-operator-b45778765-45fwt\" (UID: \"bb094bc3-3ea8-4a2b-9f41-61621ba47667\") " pod="openshift-etcd-operator/etcd-operator-b45778765-45fwt" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.709355 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv4th\" (UniqueName: \"kubernetes.io/projected/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-kube-api-access-gv4th\") pod \"console-f9d7485db-bzr6j\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.710166 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mzr52" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.717912 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-45fwt" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.726516 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rzq5f"] Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.728865 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51a36f59-fabf-4d64-bd21-c294a70460cd-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-d8cmz\" (UID: \"51a36f59-fabf-4d64-bd21-c294a70460cd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8cmz" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.741400 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnph" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.746408 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2svw\" (UniqueName: \"kubernetes.io/projected/fa4bc064-a334-47bd-820e-00ced1c89025-kube-api-access-v2svw\") pod \"oauth-openshift-558db77b4-hnvnt\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.773710 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6fzs\" (UniqueName: \"kubernetes.io/projected/6bfbc7c8-2c86-4bf8-87b6-6eec240a5e2e-kube-api-access-q6fzs\") pod \"control-plane-machine-set-operator-78cbb6b69f-8xb95\" (UID: \"6bfbc7c8-2c86-4bf8-87b6-6eec240a5e2e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8xb95" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.791179 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.810182 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.830191 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.849856 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.856097 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.884956 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb2788e1-4aae-4991-be1e-78f77e0e0811-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2vccg\" (UID: \"cb2788e1-4aae-4991-be1e-78f77e0e0811\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2vccg" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.885018 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/108ad2a6-0176-40d5-9252-577047cea58d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.885055 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/108ad2a6-0176-40d5-9252-577047cea58d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.885094 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z55xk\" (UniqueName: \"kubernetes.io/projected/828f7457-8fd5-4c12-9f95-1dd938faacb0-kube-api-access-z55xk\") pod \"ingress-operator-5b745b69d9-m47rw\" (UID: \"828f7457-8fd5-4c12-9f95-1dd938faacb0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m47rw" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.885134 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ee1fe20-1606-4835-99b5-7cd564600ea7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jznqv\" (UID: \"0ee1fe20-1606-4835-99b5-7cd564600ea7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jznqv" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.885164 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsgq8\" (UniqueName: \"kubernetes.io/projected/205d682b-7592-4b53-bcf6-0300c1084046-kube-api-access-dsgq8\") pod \"route-controller-manager-6576b87f9c-zmppb\" (UID: \"205d682b-7592-4b53-bcf6-0300c1084046\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.885229 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/205d682b-7592-4b53-bcf6-0300c1084046-serving-cert\") pod \"route-controller-manager-6576b87f9c-zmppb\" (UID: \"205d682b-7592-4b53-bcf6-0300c1084046\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.885295 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4281782c-70f4-442f-814b-2e60ea9dae88-metrics-certs\") pod 
\"router-default-5444994796-7f7vb\" (UID: \"4281782c-70f4-442f-814b-2e60ea9dae88\") " pod="openshift-ingress/router-default-5444994796-7f7vb" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.885334 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd397098-2850-4c0e-aa5b-5147532da7a5-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pfnl4\" (UID: \"cd397098-2850-4c0e-aa5b-5147532da7a5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.885374 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pczmz\" (UniqueName: \"kubernetes.io/projected/4281782c-70f4-442f-814b-2e60ea9dae88-kube-api-access-pczmz\") pod \"router-default-5444994796-7f7vb\" (UID: \"4281782c-70f4-442f-814b-2e60ea9dae88\") " pod="openshift-ingress/router-default-5444994796-7f7vb" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.885422 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cd397098-2850-4c0e-aa5b-5147532da7a5-audit-dir\") pod \"apiserver-7bbb656c7d-pfnl4\" (UID: \"cd397098-2850-4c0e-aa5b-5147532da7a5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.885482 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76924\" (UniqueName: \"kubernetes.io/projected/cb2788e1-4aae-4991-be1e-78f77e0e0811-kube-api-access-76924\") pod \"cluster-image-registry-operator-dc59b4c8b-2vccg\" (UID: \"cb2788e1-4aae-4991-be1e-78f77e0e0811\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2vccg" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.885517 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/205d682b-7592-4b53-bcf6-0300c1084046-client-ca\") pod \"route-controller-manager-6576b87f9c-zmppb\" (UID: \"205d682b-7592-4b53-bcf6-0300c1084046\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.885546 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4281782c-70f4-442f-814b-2e60ea9dae88-stats-auth\") pod \"router-default-5444994796-7f7vb\" (UID: \"4281782c-70f4-442f-814b-2e60ea9dae88\") " pod="openshift-ingress/router-default-5444994796-7f7vb" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.885590 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.885626 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56de4d78-445d-4e86-80eb-6096912ef506-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2mskq\" (UID: 
\"56de4d78-445d-4e86-80eb-6096912ef506\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mskq" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.885662 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bnsk\" (UniqueName: \"kubernetes.io/projected/77ec520a-813e-497b-97bb-c0271026540b-kube-api-access-9bnsk\") pod \"authentication-operator-69f744f599-sn5vq\" (UID: \"77ec520a-813e-497b-97bb-c0271026540b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sn5vq" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.885716 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/24a77fa6-f458-4337-bafc-20b5268bc357-images\") pod \"machine-config-operator-74547568cd-crvv6\" (UID: \"24a77fa6-f458-4337-bafc-20b5268bc357\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-crvv6" Nov 23 14:47:33 crc kubenswrapper[4718]: E1123 14:47:33.885926 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:34.385908641 +0000 UTC m=+105.625528605 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.885977 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/108ad2a6-0176-40d5-9252-577047cea58d-registry-certificates\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.886011 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cb2788e1-4aae-4991-be1e-78f77e0e0811-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2vccg\" (UID: \"cb2788e1-4aae-4991-be1e-78f77e0e0811\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2vccg" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.886035 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsn77\" (UniqueName: \"kubernetes.io/projected/24a77fa6-f458-4337-bafc-20b5268bc357-kube-api-access-qsn77\") pod \"machine-config-operator-74547568cd-crvv6\" (UID: \"24a77fa6-f458-4337-bafc-20b5268bc357\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-crvv6" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.886055 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd77m\" (UniqueName: \"kubernetes.io/projected/56de4d78-445d-4e86-80eb-6096912ef506-kube-api-access-fd77m\") pod 
\"openshift-apiserver-operator-796bbdcf4f-2mskq\" (UID: \"56de4d78-445d-4e86-80eb-6096912ef506\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mskq" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.886099 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6xh9\" (UniqueName: \"kubernetes.io/projected/108ad2a6-0176-40d5-9252-577047cea58d-kube-api-access-f6xh9\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.886167 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a631d45-c084-4027-98c1-2d0faf43d7bc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vl2sm\" (UID: \"9a631d45-c084-4027-98c1-2d0faf43d7bc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vl2sm" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.886243 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/205d682b-7592-4b53-bcf6-0300c1084046-config\") pod \"route-controller-manager-6576b87f9c-zmppb\" (UID: \"205d682b-7592-4b53-bcf6-0300c1084046\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.886502 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4281782c-70f4-442f-814b-2e60ea9dae88-service-ca-bundle\") pod \"router-default-5444994796-7f7vb\" (UID: \"4281782c-70f4-442f-814b-2e60ea9dae88\") " pod="openshift-ingress/router-default-5444994796-7f7vb" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.886580 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ee1fe20-1606-4835-99b5-7cd564600ea7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jznqv\" (UID: \"0ee1fe20-1606-4835-99b5-7cd564600ea7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jznqv" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.886626 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cd397098-2850-4c0e-aa5b-5147532da7a5-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pfnl4\" (UID: \"cd397098-2850-4c0e-aa5b-5147532da7a5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.886675 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77ec520a-813e-497b-97bb-c0271026540b-serving-cert\") pod \"authentication-operator-69f744f599-sn5vq\" (UID: \"77ec520a-813e-497b-97bb-c0271026540b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sn5vq" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.886738 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/828f7457-8fd5-4c12-9f95-1dd938faacb0-trusted-ca\") pod \"ingress-operator-5b745b69d9-m47rw\" (UID: \"828f7457-8fd5-4c12-9f95-1dd938faacb0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m47rw" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.886785 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwjtz\" (UniqueName: \"kubernetes.io/projected/cd397098-2850-4c0e-aa5b-5147532da7a5-kube-api-access-kwjtz\") pod \"apiserver-7bbb656c7d-pfnl4\" (UID: \"cd397098-2850-4c0e-aa5b-5147532da7a5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.886855 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/108ad2a6-0176-40d5-9252-577047cea58d-bound-sa-token\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.886892 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4281782c-70f4-442f-814b-2e60ea9dae88-default-certificate\") pod \"router-default-5444994796-7f7vb\" (UID: \"4281782c-70f4-442f-814b-2e60ea9dae88\") " pod="openshift-ingress/router-default-5444994796-7f7vb" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.886928 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/108ad2a6-0176-40d5-9252-577047cea58d-trusted-ca\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.887037 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56de4d78-445d-4e86-80eb-6096912ef506-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2mskq\" (UID: \"56de4d78-445d-4e86-80eb-6096912ef506\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mskq" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.887090 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/828f7457-8fd5-4c12-9f95-1dd938faacb0-bound-sa-token\") pod \"ingress-operator-5b745b69d9-m47rw\" (UID: \"828f7457-8fd5-4c12-9f95-1dd938faacb0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m47rw" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.887128 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a631d45-c084-4027-98c1-2d0faf43d7bc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vl2sm\" (UID: \"9a631d45-c084-4027-98c1-2d0faf43d7bc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vl2sm" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.887178 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/cd397098-2850-4c0e-aa5b-5147532da7a5-audit-policies\") pod \"apiserver-7bbb656c7d-pfnl4\" (UID: \"cd397098-2850-4c0e-aa5b-5147532da7a5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.887209 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb2788e1-4aae-4991-be1e-78f77e0e0811-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2vccg\" (UID: \"cb2788e1-4aae-4991-be1e-78f77e0e0811\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2vccg" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.887240 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/828f7457-8fd5-4c12-9f95-1dd938faacb0-metrics-tls\") pod \"ingress-operator-5b745b69d9-m47rw\" (UID: \"828f7457-8fd5-4c12-9f95-1dd938faacb0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m47rw" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.887278 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ee1fe20-1606-4835-99b5-7cd564600ea7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jznqv\" (UID: \"0ee1fe20-1606-4835-99b5-7cd564600ea7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jznqv" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.887312 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cd397098-2850-4c0e-aa5b-5147532da7a5-encryption-config\") pod \"apiserver-7bbb656c7d-pfnl4\" (UID: \"cd397098-2850-4c0e-aa5b-5147532da7a5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.887379 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77ec520a-813e-497b-97bb-c0271026540b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-sn5vq\" (UID: \"77ec520a-813e-497b-97bb-c0271026540b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sn5vq" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.887426 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77ec520a-813e-497b-97bb-c0271026540b-service-ca-bundle\") pod \"authentication-operator-69f744f599-sn5vq\" (UID: \"77ec520a-813e-497b-97bb-c0271026540b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sn5vq" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.887528 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/24a77fa6-f458-4337-bafc-20b5268bc357-proxy-tls\") pod \"machine-config-operator-74547568cd-crvv6\" (UID: \"24a77fa6-f458-4337-bafc-20b5268bc357\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-crvv6" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.887571 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/cd397098-2850-4c0e-aa5b-5147532da7a5-serving-cert\") pod \"apiserver-7bbb656c7d-pfnl4\" (UID: \"cd397098-2850-4c0e-aa5b-5147532da7a5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.887650 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cd397098-2850-4c0e-aa5b-5147532da7a5-etcd-client\") pod \"apiserver-7bbb656c7d-pfnl4\" (UID: \"cd397098-2850-4c0e-aa5b-5147532da7a5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.887703 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/108ad2a6-0176-40d5-9252-577047cea58d-registry-tls\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.887757 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24a77fa6-f458-4337-bafc-20b5268bc357-auth-proxy-config\") pod \"machine-config-operator-74547568cd-crvv6\" (UID: \"24a77fa6-f458-4337-bafc-20b5268bc357\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-crvv6" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.887805 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqbqz\" (UniqueName: \"kubernetes.io/projected/9a631d45-c084-4027-98c1-2d0faf43d7bc-kube-api-access-vqbqz\") pod \"kube-storage-version-migrator-operator-b67b599dd-vl2sm\" (UID: \"9a631d45-c084-4027-98c1-2d0faf43d7bc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vl2sm" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.887843 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77ec520a-813e-497b-97bb-c0271026540b-config\") pod \"authentication-operator-69f744f599-sn5vq\" (UID: \"77ec520a-813e-497b-97bb-c0271026540b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sn5vq" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.901034 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8cmz" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.949349 4718 util.go:30] "No sandbox for pod can be found. 
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.989544 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.989898 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqbqz\" (UniqueName: \"kubernetes.io/projected/9a631d45-c084-4027-98c1-2d0faf43d7bc-kube-api-access-vqbqz\") pod \"kube-storage-version-migrator-operator-b67b599dd-vl2sm\" (UID: \"9a631d45-c084-4027-98c1-2d0faf43d7bc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vl2sm"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.989973 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fddd9483-5921-4244-af22-39499cfcd168-apiservice-cert\") pod \"packageserver-d55dfcdfc-gzctt\" (UID: \"fddd9483-5921-4244-af22-39499cfcd168\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gzctt"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.990010 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8p8f\" (UniqueName: \"kubernetes.io/projected/5a619d49-cea1-437d-8b81-6947b3085376-kube-api-access-b8p8f\") pod \"service-ca-9c57cc56f-tdwkr\" (UID: \"5a619d49-cea1-437d-8b81-6947b3085376\") " pod="openshift-service-ca/service-ca-9c57cc56f-tdwkr"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.990059 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77ec520a-813e-497b-97bb-c0271026540b-config\") pod \"authentication-operator-69f744f599-sn5vq\" (UID: \"77ec520a-813e-497b-97bb-c0271026540b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sn5vq"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.990120 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wh2p\" (UniqueName: \"kubernetes.io/projected/fddd9483-5921-4244-af22-39499cfcd168-kube-api-access-7wh2p\") pod \"packageserver-d55dfcdfc-gzctt\" (UID: \"fddd9483-5921-4244-af22-39499cfcd168\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gzctt"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.990149 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/844845db-0b6a-4365-aa6e-760f4dd1157d-proxy-tls\") pod \"machine-config-controller-84d6567774-gptqg\" (UID: \"844845db-0b6a-4365-aa6e-760f4dd1157d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gptqg"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.990204 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb2788e1-4aae-4991-be1e-78f77e0e0811-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2vccg\" (UID: \"cb2788e1-4aae-4991-be1e-78f77e0e0811\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2vccg"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.990234 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/108ad2a6-0176-40d5-9252-577047cea58d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.990285 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/108ad2a6-0176-40d5-9252-577047cea58d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.990320 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhtm4\" (UniqueName: \"kubernetes.io/projected/002000ac-5a96-4bb1-985a-7863f4c9e05a-kube-api-access-qhtm4\") pod \"machine-config-server-mcr5m\" (UID: \"002000ac-5a96-4bb1-985a-7863f4c9e05a\") " pod="openshift-machine-config-operator/machine-config-server-mcr5m"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.990346 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl6hg\" (UniqueName: \"kubernetes.io/projected/fbbd315e-478d-48fe-8df6-a58a288adba0-kube-api-access-rl6hg\") pod \"dns-default-jh7tn\" (UID: \"fbbd315e-478d-48fe-8df6-a58a288adba0\") " pod="openshift-dns/dns-default-jh7tn"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.990395 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z55xk\" (UniqueName: \"kubernetes.io/projected/828f7457-8fd5-4c12-9f95-1dd938faacb0-kube-api-access-z55xk\") pod \"ingress-operator-5b745b69d9-m47rw\" (UID: \"828f7457-8fd5-4c12-9f95-1dd938faacb0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m47rw"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.990464 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/af45cb6f-fed7-415b-bf75-e742a875a1c0-plugins-dir\") pod \"csi-hostpathplugin-5qvjc\" (UID: \"af45cb6f-fed7-415b-bf75-e742a875a1c0\") " pod="hostpath-provisioner/csi-hostpathplugin-5qvjc"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.990562 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ee1fe20-1606-4835-99b5-7cd564600ea7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jznqv\" (UID: \"0ee1fe20-1606-4835-99b5-7cd564600ea7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jznqv"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.990603 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgq8\" (UniqueName: \"kubernetes.io/projected/205d682b-7592-4b53-bcf6-0300c1084046-kube-api-access-dsgq8\") pod \"route-controller-manager-6576b87f9c-zmppb\" (UID: \"205d682b-7592-4b53-bcf6-0300c1084046\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.990795 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/205d682b-7592-4b53-bcf6-0300c1084046-serving-cert\") pod \"route-controller-manager-6576b87f9c-zmppb\" (UID: \"205d682b-7592-4b53-bcf6-0300c1084046\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.990833 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkqrz\" (UniqueName: \"kubernetes.io/projected/00ada162-924a-42f2-85b2-62e5df70027d-kube-api-access-gkqrz\") pod \"olm-operator-6b444d44fb-42h2c\" (UID: \"00ada162-924a-42f2-85b2-62e5df70027d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-42h2c"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.990874 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4281782c-70f4-442f-814b-2e60ea9dae88-metrics-certs\") pod \"router-default-5444994796-7f7vb\" (UID: \"4281782c-70f4-442f-814b-2e60ea9dae88\") " pod="openshift-ingress/router-default-5444994796-7f7vb"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.990942 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd397098-2850-4c0e-aa5b-5147532da7a5-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pfnl4\" (UID: \"cd397098-2850-4c0e-aa5b-5147532da7a5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.990971 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pczmz\" (UniqueName: \"kubernetes.io/projected/4281782c-70f4-442f-814b-2e60ea9dae88-kube-api-access-pczmz\") pod \"router-default-5444994796-7f7vb\" (UID: \"4281782c-70f4-442f-814b-2e60ea9dae88\") " pod="openshift-ingress/router-default-5444994796-7f7vb"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.990996 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cd397098-2850-4c0e-aa5b-5147532da7a5-audit-dir\") pod \"apiserver-7bbb656c7d-pfnl4\" (UID: \"cd397098-2850-4c0e-aa5b-5147532da7a5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.991045 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76924\" (UniqueName: \"kubernetes.io/projected/cb2788e1-4aae-4991-be1e-78f77e0e0811-kube-api-access-76924\") pod \"cluster-image-registry-operator-dc59b4c8b-2vccg\" (UID: \"cb2788e1-4aae-4991-be1e-78f77e0e0811\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2vccg"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.991071 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/205d682b-7592-4b53-bcf6-0300c1084046-client-ca\") pod \"route-controller-manager-6576b87f9c-zmppb\" (UID: \"205d682b-7592-4b53-bcf6-0300c1084046\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.991101 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4281782c-70f4-442f-814b-2e60ea9dae88-stats-auth\") pod \"router-default-5444994796-7f7vb\" (UID: \"4281782c-70f4-442f-814b-2e60ea9dae88\") " pod="openshift-ingress/router-default-5444994796-7f7vb"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.991169 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56de4d78-445d-4e86-80eb-6096912ef506-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2mskq\" (UID: \"56de4d78-445d-4e86-80eb-6096912ef506\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mskq"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.991237 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bnsk\" (UniqueName: \"kubernetes.io/projected/77ec520a-813e-497b-97bb-c0271026540b-kube-api-access-9bnsk\") pod \"authentication-operator-69f744f599-sn5vq\" (UID: \"77ec520a-813e-497b-97bb-c0271026540b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sn5vq"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.991266 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/108ad2a6-0176-40d5-9252-577047cea58d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.991285 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5a619d49-cea1-437d-8b81-6947b3085376-signing-key\") pod \"service-ca-9c57cc56f-tdwkr\" (UID: \"5a619d49-cea1-437d-8b81-6947b3085376\") " pod="openshift-service-ca/service-ca-9c57cc56f-tdwkr"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.991312 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/00ada162-924a-42f2-85b2-62e5df70027d-srv-cert\") pod \"olm-operator-6b444d44fb-42h2c\" (UID: \"00ada162-924a-42f2-85b2-62e5df70027d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-42h2c"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.991379 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/69074044-d0c1-4f42-bd99-bce808257377-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-czc9t\" (UID: \"69074044-d0c1-4f42-bd99-bce808257377\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czc9t"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.991521 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fddd9483-5921-4244-af22-39499cfcd168-tmpfs\") pod \"packageserver-d55dfcdfc-gzctt\" (UID: \"fddd9483-5921-4244-af22-39499cfcd168\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gzctt"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.991559 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/24a77fa6-f458-4337-bafc-20b5268bc357-images\") pod \"machine-config-operator-74547568cd-crvv6\" (UID: \"24a77fa6-f458-4337-bafc-20b5268bc357\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-crvv6"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.991590 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz45x\" (UniqueName: \"kubernetes.io/projected/69074044-d0c1-4f42-bd99-bce808257377-kube-api-access-hz45x\") pod \"package-server-manager-789f6589d5-czc9t\" (UID: \"69074044-d0c1-4f42-bd99-bce808257377\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czc9t"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.991619 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/108ad2a6-0176-40d5-9252-577047cea58d-registry-certificates\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.991649 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cb2788e1-4aae-4991-be1e-78f77e0e0811-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2vccg\" (UID: \"cb2788e1-4aae-4991-be1e-78f77e0e0811\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2vccg"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.991679 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsn77\" (UniqueName: \"kubernetes.io/projected/24a77fa6-f458-4337-bafc-20b5268bc357-kube-api-access-qsn77\") pod \"machine-config-operator-74547568cd-crvv6\" (UID: \"24a77fa6-f458-4337-bafc-20b5268bc357\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-crvv6"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.991708 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd77m\" (UniqueName: \"kubernetes.io/projected/56de4d78-445d-4e86-80eb-6096912ef506-kube-api-access-fd77m\") pod \"openshift-apiserver-operator-796bbdcf4f-2mskq\" (UID: \"56de4d78-445d-4e86-80eb-6096912ef506\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mskq"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.991739 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/03f55fc0-e04d-4f3a-8869-80cbb53c26ee-secret-volume\") pod \"collect-profiles-29398485-2272c\" (UID: \"03f55fc0-e04d-4f3a-8869-80cbb53c26ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398485-2272c"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.991762 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fddd9483-5921-4244-af22-39499cfcd168-webhook-cert\") pod \"packageserver-d55dfcdfc-gzctt\" (UID: \"fddd9483-5921-4244-af22-39499cfcd168\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gzctt"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.991789 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bf1008cf-6837-4089-ae38-2e44add1cfa5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qpsmt\" (UID: \"bf1008cf-6837-4089-ae38-2e44add1cfa5\") " pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.991821 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6xh9\" (UniqueName: \"kubernetes.io/projected/108ad2a6-0176-40d5-9252-577047cea58d-kube-api-access-f6xh9\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.991850 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a631d45-c084-4027-98c1-2d0faf43d7bc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vl2sm\" (UID: \"9a631d45-c084-4027-98c1-2d0faf43d7bc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vl2sm"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.991883 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/205d682b-7592-4b53-bcf6-0300c1084046-config\") pod \"route-controller-manager-6576b87f9c-zmppb\" (UID: \"205d682b-7592-4b53-bcf6-0300c1084046\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.991919 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4281782c-70f4-442f-814b-2e60ea9dae88-service-ca-bundle\") pod \"router-default-5444994796-7f7vb\" (UID: \"4281782c-70f4-442f-814b-2e60ea9dae88\") " pod="openshift-ingress/router-default-5444994796-7f7vb"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.991948 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/002000ac-5a96-4bb1-985a-7863f4c9e05a-certs\") pod \"machine-config-server-mcr5m\" (UID: \"002000ac-5a96-4bb1-985a-7863f4c9e05a\") " pod="openshift-machine-config-operator/machine-config-server-mcr5m"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.992015 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ee1fe20-1606-4835-99b5-7cd564600ea7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jznqv\" (UID: \"0ee1fe20-1606-4835-99b5-7cd564600ea7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jznqv"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.992046 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cd397098-2850-4c0e-aa5b-5147532da7a5-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pfnl4\" (UID: \"cd397098-2850-4c0e-aa5b-5147532da7a5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4"
Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.992097 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77ec520a-813e-497b-97bb-c0271026540b-serving-cert\") pod \"authentication-operator-69f744f599-sn5vq\" (UID: \"77ec520a-813e-497b-97bb-c0271026540b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sn5vq"
\"77ec520a-813e-497b-97bb-c0271026540b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sn5vq" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.992125 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5a619d49-cea1-437d-8b81-6947b3085376-signing-cabundle\") pod \"service-ca-9c57cc56f-tdwkr\" (UID: \"5a619d49-cea1-437d-8b81-6947b3085376\") " pod="openshift-service-ca/service-ca-9c57cc56f-tdwkr" Nov 23 14:47:33 crc kubenswrapper[4718]: E1123 14:47:33.992174 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:34.492148163 +0000 UTC m=+105.731768017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.993595 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/205d682b-7592-4b53-bcf6-0300c1084046-client-ca\") pod \"route-controller-manager-6576b87f9c-zmppb\" (UID: \"205d682b-7592-4b53-bcf6-0300c1084046\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.994146 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/108ad2a6-0176-40d5-9252-577047cea58d-registry-certificates\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.994251 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cd397098-2850-4c0e-aa5b-5147532da7a5-audit-dir\") pod \"apiserver-7bbb656c7d-pfnl4\" (UID: \"cd397098-2850-4c0e-aa5b-5147532da7a5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.994485 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ee1fe20-1606-4835-99b5-7cd564600ea7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jznqv\" (UID: \"0ee1fe20-1606-4835-99b5-7cd564600ea7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jznqv" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.996750 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/828f7457-8fd5-4c12-9f95-1dd938faacb0-trusted-ca\") pod \"ingress-operator-5b745b69d9-m47rw\" (UID: \"828f7457-8fd5-4c12-9f95-1dd938faacb0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m47rw" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.996837 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7mkr\" (UniqueName: \"kubernetes.io/projected/af45cb6f-fed7-415b-bf75-e742a875a1c0-kube-api-access-h7mkr\") pod \"csi-hostpathplugin-5qvjc\" (UID: \"af45cb6f-fed7-415b-bf75-e742a875a1c0\") " pod="hostpath-provisioner/csi-hostpathplugin-5qvjc" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.996904 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwjtz\" (UniqueName: \"kubernetes.io/projected/cd397098-2850-4c0e-aa5b-5147532da7a5-kube-api-access-kwjtz\") pod \"apiserver-7bbb656c7d-pfnl4\" (UID: \"cd397098-2850-4c0e-aa5b-5147532da7a5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.997036 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6b18826-96e9-4e47-9f95-3f19f298683b-profile-collector-cert\") pod \"catalog-operator-68c6474976-7x8l8\" (UID: \"b6b18826-96e9-4e47-9f95-3f19f298683b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7x8l8" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.997226 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/108ad2a6-0176-40d5-9252-577047cea58d-bound-sa-token\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.997266 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4281782c-70f4-442f-814b-2e60ea9dae88-default-certificate\") pod \"router-default-5444994796-7f7vb\" (UID: \"4281782c-70f4-442f-814b-2e60ea9dae88\") " pod="openshift-ingress/router-default-5444994796-7f7vb" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.997298 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fbbd315e-478d-48fe-8df6-a58a288adba0-metrics-tls\") pod \"dns-default-jh7tn\" (UID: \"fbbd315e-478d-48fe-8df6-a58a288adba0\") " pod="openshift-dns/dns-default-jh7tn" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.997324 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3751d59c-0c3d-4000-b034-7b33039c7930-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-q52hx\" (UID: \"3751d59c-0c3d-4000-b034-7b33039c7930\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q52hx" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.997390 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/108ad2a6-0176-40d5-9252-577047cea58d-trusted-ca\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.997470 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56de4d78-445d-4e86-80eb-6096912ef506-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2mskq\" (UID: 
\"56de4d78-445d-4e86-80eb-6096912ef506\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mskq" Nov 23 14:47:33 crc kubenswrapper[4718]: I1123 14:47:33.997544 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4281782c-70f4-442f-814b-2e60ea9dae88-service-ca-bundle\") pod \"router-default-5444994796-7f7vb\" (UID: \"4281782c-70f4-442f-814b-2e60ea9dae88\") " pod="openshift-ingress/router-default-5444994796-7f7vb" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.003985 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.004188 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77ec520a-813e-497b-97bb-c0271026540b-config\") pod \"authentication-operator-69f744f599-sn5vq\" (UID: \"77ec520a-813e-497b-97bb-c0271026540b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sn5vq" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.004837 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/af45cb6f-fed7-415b-bf75-e742a875a1c0-mountpoint-dir\") pod \"csi-hostpathplugin-5qvjc\" (UID: \"af45cb6f-fed7-415b-bf75-e742a875a1c0\") " pod="hostpath-provisioner/csi-hostpathplugin-5qvjc" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.004884 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/828f7457-8fd5-4c12-9f95-1dd938faacb0-bound-sa-token\") pod \"ingress-operator-5b745b69d9-m47rw\" (UID: \"828f7457-8fd5-4c12-9f95-1dd938faacb0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m47rw" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.005472 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56de4d78-445d-4e86-80eb-6096912ef506-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2mskq\" (UID: \"56de4d78-445d-4e86-80eb-6096912ef506\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mskq" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.005816 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/108ad2a6-0176-40d5-9252-577047cea58d-trusted-ca\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.006541 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a631d45-c084-4027-98c1-2d0faf43d7bc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vl2sm\" (UID: \"9a631d45-c084-4027-98c1-2d0faf43d7bc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vl2sm" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.006599 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2tfr\" (UniqueName: \"kubernetes.io/projected/bde0d1cd-9913-424d-886d-720a6d29cab8-kube-api-access-x2tfr\") pod 
\"ingress-canary-c8v2t\" (UID: \"bde0d1cd-9913-424d-886d-720a6d29cab8\") " pod="openshift-ingress-canary/ingress-canary-c8v2t" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.006767 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/24a77fa6-f458-4337-bafc-20b5268bc357-images\") pod \"machine-config-operator-74547568cd-crvv6\" (UID: \"24a77fa6-f458-4337-bafc-20b5268bc357\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-crvv6" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.006816 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/af45cb6f-fed7-415b-bf75-e742a875a1c0-socket-dir\") pod \"csi-hostpathplugin-5qvjc\" (UID: \"af45cb6f-fed7-415b-bf75-e742a875a1c0\") " pod="hostpath-provisioner/csi-hostpathplugin-5qvjc" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.006907 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6b18826-96e9-4e47-9f95-3f19f298683b-srv-cert\") pod \"catalog-operator-68c6474976-7x8l8\" (UID: \"b6b18826-96e9-4e47-9f95-3f19f298683b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7x8l8" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.006939 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb2788e1-4aae-4991-be1e-78f77e0e0811-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2vccg\" (UID: \"cb2788e1-4aae-4991-be1e-78f77e0e0811\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2vccg" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.006964 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cd397098-2850-4c0e-aa5b-5147532da7a5-audit-policies\") pod \"apiserver-7bbb656c7d-pfnl4\" (UID: \"cd397098-2850-4c0e-aa5b-5147532da7a5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.007016 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ccfcbc9-c42e-4cf2-bfa3-a3c6e471b9f0-serving-cert\") pod \"service-ca-operator-777779d784-wffdm\" (UID: \"7ccfcbc9-c42e-4cf2-bfa3-a3c6e471b9f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wffdm" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.007039 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q88pg\" (UniqueName: \"kubernetes.io/projected/3751d59c-0c3d-4000-b034-7b33039c7930-kube-api-access-q88pg\") pod \"multus-admission-controller-857f4d67dd-q52hx\" (UID: \"3751d59c-0c3d-4000-b034-7b33039c7930\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q52hx" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.007059 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/af45cb6f-fed7-415b-bf75-e742a875a1c0-csi-data-dir\") pod \"csi-hostpathplugin-5qvjc\" (UID: \"af45cb6f-fed7-415b-bf75-e742a875a1c0\") " pod="hostpath-provisioner/csi-hostpathplugin-5qvjc" Nov 23 14:47:34 crc kubenswrapper[4718]: 
I1123 14:47:34.007083 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a631d45-c084-4027-98c1-2d0faf43d7bc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vl2sm\" (UID: \"9a631d45-c084-4027-98c1-2d0faf43d7bc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vl2sm" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.008429 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/828f7457-8fd5-4c12-9f95-1dd938faacb0-metrics-tls\") pod \"ingress-operator-5b745b69d9-m47rw\" (UID: \"828f7457-8fd5-4c12-9f95-1dd938faacb0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m47rw" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.008472 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/002000ac-5a96-4bb1-985a-7863f4c9e05a-node-bootstrap-token\") pod \"machine-config-server-mcr5m\" (UID: \"002000ac-5a96-4bb1-985a-7863f4c9e05a\") " pod="openshift-machine-config-operator/machine-config-server-mcr5m" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.008506 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgk65\" (UniqueName: \"kubernetes.io/projected/b6b18826-96e9-4e47-9f95-3f19f298683b-kube-api-access-kgk65\") pod \"catalog-operator-68c6474976-7x8l8\" (UID: \"b6b18826-96e9-4e47-9f95-3f19f298683b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7x8l8" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.008526 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf1008cf-6837-4089-ae38-2e44add1cfa5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qpsmt\" (UID: \"bf1008cf-6837-4089-ae38-2e44add1cfa5\") " pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.008546 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/844845db-0b6a-4365-aa6e-760f4dd1157d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-gptqg\" (UID: \"844845db-0b6a-4365-aa6e-760f4dd1157d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gptqg" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.009393 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb2788e1-4aae-4991-be1e-78f77e0e0811-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2vccg\" (UID: \"cb2788e1-4aae-4991-be1e-78f77e0e0811\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2vccg" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.010669 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a631d45-c084-4027-98c1-2d0faf43d7bc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vl2sm\" (UID: \"9a631d45-c084-4027-98c1-2d0faf43d7bc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vl2sm" Nov 23 14:47:34 
crc kubenswrapper[4718]: I1123 14:47:34.010685 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbbd315e-478d-48fe-8df6-a58a288adba0-config-volume\") pod \"dns-default-jh7tn\" (UID: \"fbbd315e-478d-48fe-8df6-a58a288adba0\") " pod="openshift-dns/dns-default-jh7tn" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.010894 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/00ada162-924a-42f2-85b2-62e5df70027d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-42h2c\" (UID: \"00ada162-924a-42f2-85b2-62e5df70027d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-42h2c" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.011129 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrqgc\" (UniqueName: \"kubernetes.io/projected/7ccfcbc9-c42e-4cf2-bfa3-a3c6e471b9f0-kube-api-access-lrqgc\") pod \"service-ca-operator-777779d784-wffdm\" (UID: \"7ccfcbc9-c42e-4cf2-bfa3-a3c6e471b9f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wffdm" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.011287 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ee1fe20-1606-4835-99b5-7cd564600ea7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jznqv\" (UID: \"0ee1fe20-1606-4835-99b5-7cd564600ea7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jznqv" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.011369 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cd397098-2850-4c0e-aa5b-5147532da7a5-encryption-config\") pod \"apiserver-7bbb656c7d-pfnl4\" (UID: \"cd397098-2850-4c0e-aa5b-5147532da7a5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.011538 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6swh4\" (UniqueName: \"kubernetes.io/projected/bf1008cf-6837-4089-ae38-2e44add1cfa5-kube-api-access-6swh4\") pod \"marketplace-operator-79b997595-qpsmt\" (UID: \"bf1008cf-6837-4089-ae38-2e44add1cfa5\") " pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.011779 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77ec520a-813e-497b-97bb-c0271026540b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-sn5vq\" (UID: \"77ec520a-813e-497b-97bb-c0271026540b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sn5vq" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.011823 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ccfcbc9-c42e-4cf2-bfa3-a3c6e471b9f0-config\") pod \"service-ca-operator-777779d784-wffdm\" (UID: \"7ccfcbc9-c42e-4cf2-bfa3-a3c6e471b9f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wffdm" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.012726 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/205d682b-7592-4b53-bcf6-0300c1084046-config\") pod \"route-controller-manager-6576b87f9c-zmppb\" (UID: \"205d682b-7592-4b53-bcf6-0300c1084046\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.013423 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77ec520a-813e-497b-97bb-c0271026540b-service-ca-bundle\") pod \"authentication-operator-69f744f599-sn5vq\" (UID: \"77ec520a-813e-497b-97bb-c0271026540b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sn5vq" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.013512 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66rbj\" (UniqueName: \"kubernetes.io/projected/844845db-0b6a-4365-aa6e-760f4dd1157d-kube-api-access-66rbj\") pod \"machine-config-controller-84d6567774-gptqg\" (UID: \"844845db-0b6a-4365-aa6e-760f4dd1157d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gptqg" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.013642 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77ec520a-813e-497b-97bb-c0271026540b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-sn5vq\" (UID: \"77ec520a-813e-497b-97bb-c0271026540b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sn5vq" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.015345 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd397098-2850-4c0e-aa5b-5147532da7a5-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pfnl4\" (UID: \"cd397098-2850-4c0e-aa5b-5147532da7a5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.015478 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4281782c-70f4-442f-814b-2e60ea9dae88-metrics-certs\") pod \"router-default-5444994796-7f7vb\" (UID: \"4281782c-70f4-442f-814b-2e60ea9dae88\") " pod="openshift-ingress/router-default-5444994796-7f7vb" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.017626 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/828f7457-8fd5-4c12-9f95-1dd938faacb0-trusted-ca\") pod \"ingress-operator-5b745b69d9-m47rw\" (UID: \"828f7457-8fd5-4c12-9f95-1dd938faacb0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m47rw" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.017890 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/205d682b-7592-4b53-bcf6-0300c1084046-serving-cert\") pod \"route-controller-manager-6576b87f9c-zmppb\" (UID: \"205d682b-7592-4b53-bcf6-0300c1084046\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.017897 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/4281782c-70f4-442f-814b-2e60ea9dae88-default-certificate\") pod \"router-default-5444994796-7f7vb\" (UID: \"4281782c-70f4-442f-814b-2e60ea9dae88\") " pod="openshift-ingress/router-default-5444994796-7f7vb" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.018140 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77ec520a-813e-497b-97bb-c0271026540b-serving-cert\") pod \"authentication-operator-69f744f599-sn5vq\" (UID: \"77ec520a-813e-497b-97bb-c0271026540b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sn5vq" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.018390 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56de4d78-445d-4e86-80eb-6096912ef506-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2mskq\" (UID: \"56de4d78-445d-4e86-80eb-6096912ef506\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mskq" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.018457 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cd397098-2850-4c0e-aa5b-5147532da7a5-audit-policies\") pod \"apiserver-7bbb656c7d-pfnl4\" (UID: \"cd397098-2850-4c0e-aa5b-5147532da7a5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.018521 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/af45cb6f-fed7-415b-bf75-e742a875a1c0-registration-dir\") pod \"csi-hostpathplugin-5qvjc\" (UID: \"af45cb6f-fed7-415b-bf75-e742a875a1c0\") " pod="hostpath-provisioner/csi-hostpathplugin-5qvjc" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.018887 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/24a77fa6-f458-4337-bafc-20b5268bc357-proxy-tls\") pod \"machine-config-operator-74547568cd-crvv6\" (UID: \"24a77fa6-f458-4337-bafc-20b5268bc357\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-crvv6" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.018944 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd397098-2850-4c0e-aa5b-5147532da7a5-serving-cert\") pod \"apiserver-7bbb656c7d-pfnl4\" (UID: \"cd397098-2850-4c0e-aa5b-5147532da7a5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.018992 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03f55fc0-e04d-4f3a-8869-80cbb53c26ee-config-volume\") pod \"collect-profiles-29398485-2272c\" (UID: \"03f55fc0-e04d-4f3a-8869-80cbb53c26ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398485-2272c" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.019028 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cd397098-2850-4c0e-aa5b-5147532da7a5-etcd-client\") pod \"apiserver-7bbb656c7d-pfnl4\" (UID: \"cd397098-2850-4c0e-aa5b-5147532da7a5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" Nov 23 14:47:34 crc 
Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.019114 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bde0d1cd-9913-424d-886d-720a6d29cab8-cert\") pod \"ingress-canary-c8v2t\" (UID: \"bde0d1cd-9913-424d-886d-720a6d29cab8\") " pod="openshift-ingress-canary/ingress-canary-c8v2t"
Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.019167 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77ec520a-813e-497b-97bb-c0271026540b-service-ca-bundle\") pod \"authentication-operator-69f744f599-sn5vq\" (UID: \"77ec520a-813e-497b-97bb-c0271026540b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sn5vq"
Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.019163 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr4lk\" (UniqueName: \"kubernetes.io/projected/03f55fc0-e04d-4f3a-8869-80cbb53c26ee-kube-api-access-gr4lk\") pod \"collect-profiles-29398485-2272c\" (UID: \"03f55fc0-e04d-4f3a-8869-80cbb53c26ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398485-2272c"
Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.019892 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ee1fe20-1606-4835-99b5-7cd564600ea7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jznqv\" (UID: \"0ee1fe20-1606-4835-99b5-7cd564600ea7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jznqv"
Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.019918 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cd397098-2850-4c0e-aa5b-5147532da7a5-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pfnl4\" (UID: \"cd397098-2850-4c0e-aa5b-5147532da7a5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4"
Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.021013 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/108ad2a6-0176-40d5-9252-577047cea58d-registry-tls\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh"
Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.021324 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24a77fa6-f458-4337-bafc-20b5268bc357-auth-proxy-config\") pod \"machine-config-operator-74547568cd-crvv6\" (UID: \"24a77fa6-f458-4337-bafc-20b5268bc357\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-crvv6"
Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.021890 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24a77fa6-f458-4337-bafc-20b5268bc357-auth-proxy-config\") pod \"machine-config-operator-74547568cd-crvv6\" (UID: \"24a77fa6-f458-4337-bafc-20b5268bc357\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-crvv6"
\"kubernetes.io/secret/828f7457-8fd5-4c12-9f95-1dd938faacb0-metrics-tls\") pod \"ingress-operator-5b745b69d9-m47rw\" (UID: \"828f7457-8fd5-4c12-9f95-1dd938faacb0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m47rw" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.023395 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/108ad2a6-0176-40d5-9252-577047cea58d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.024348 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cb2788e1-4aae-4991-be1e-78f77e0e0811-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2vccg\" (UID: \"cb2788e1-4aae-4991-be1e-78f77e0e0811\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2vccg" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.024466 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/24a77fa6-f458-4337-bafc-20b5268bc357-proxy-tls\") pod \"machine-config-operator-74547568cd-crvv6\" (UID: \"24a77fa6-f458-4337-bafc-20b5268bc357\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-crvv6" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.026039 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cd397098-2850-4c0e-aa5b-5147532da7a5-etcd-client\") pod \"apiserver-7bbb656c7d-pfnl4\" (UID: \"cd397098-2850-4c0e-aa5b-5147532da7a5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.027184 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd397098-2850-4c0e-aa5b-5147532da7a5-serving-cert\") pod \"apiserver-7bbb656c7d-pfnl4\" (UID: \"cd397098-2850-4c0e-aa5b-5147532da7a5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.027807 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/108ad2a6-0176-40d5-9252-577047cea58d-registry-tls\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.028683 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4281782c-70f4-442f-814b-2e60ea9dae88-stats-auth\") pod \"router-default-5444994796-7f7vb\" (UID: \"4281782c-70f4-442f-814b-2e60ea9dae88\") " pod="openshift-ingress/router-default-5444994796-7f7vb" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.029766 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cd397098-2850-4c0e-aa5b-5147532da7a5-encryption-config\") pod \"apiserver-7bbb656c7d-pfnl4\" (UID: \"cd397098-2850-4c0e-aa5b-5147532da7a5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.031222 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb2788e1-4aae-4991-be1e-78f77e0e0811-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2vccg\" (UID: \"cb2788e1-4aae-4991-be1e-78f77e0e0811\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2vccg" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.036399 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8xb95" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.055027 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7vdf5"] Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.056165 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gwk4k"] Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.061059 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqbqz\" (UniqueName: \"kubernetes.io/projected/9a631d45-c084-4027-98c1-2d0faf43d7bc-kube-api-access-vqbqz\") pod \"kube-storage-version-migrator-operator-b67b599dd-vl2sm\" (UID: \"9a631d45-c084-4027-98c1-2d0faf43d7bc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vl2sm" Nov 23 14:47:34 crc kubenswrapper[4718]: W1123 14:47:34.068654 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95464c4e_4616_4ab3_9928_4dc41beee4af.slice/crio-878089f2f0f52555a2f62a82959a7c1ea31f3d65db0e14e3642480924af4de58 WatchSource:0}: Error finding container 878089f2f0f52555a2f62a82959a7c1ea31f3d65db0e14e3642480924af4de58: Status 404 returned error can't find the container with id 878089f2f0f52555a2f62a82959a7c1ea31f3d65db0e14e3642480924af4de58 Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.074037 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6xh9\" (UniqueName: \"kubernetes.io/projected/108ad2a6-0176-40d5-9252-577047cea58d-kube-api-access-f6xh9\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.090163 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-7hg9r"] Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.090603 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd77m\" (UniqueName: \"kubernetes.io/projected/56de4d78-445d-4e86-80eb-6096912ef506-kube-api-access-fd77m\") pod \"openshift-apiserver-operator-796bbdcf4f-2mskq\" (UID: \"56de4d78-445d-4e86-80eb-6096912ef506\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mskq" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.119650 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsn77\" (UniqueName: \"kubernetes.io/projected/24a77fa6-f458-4337-bafc-20b5268bc357-kube-api-access-qsn77\") pod \"machine-config-operator-74547568cd-crvv6\" (UID: \"24a77fa6-f458-4337-bafc-20b5268bc357\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-crvv6" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.123957 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fddd9483-5921-4244-af22-39499cfcd168-apiservice-cert\") pod \"packageserver-d55dfcdfc-gzctt\" (UID: \"fddd9483-5921-4244-af22-39499cfcd168\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gzctt" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.123997 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8p8f\" (UniqueName: \"kubernetes.io/projected/5a619d49-cea1-437d-8b81-6947b3085376-kube-api-access-b8p8f\") pod \"service-ca-9c57cc56f-tdwkr\" (UID: \"5a619d49-cea1-437d-8b81-6947b3085376\") " pod="openshift-service-ca/service-ca-9c57cc56f-tdwkr" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124021 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wh2p\" (UniqueName: \"kubernetes.io/projected/fddd9483-5921-4244-af22-39499cfcd168-kube-api-access-7wh2p\") pod \"packageserver-d55dfcdfc-gzctt\" (UID: \"fddd9483-5921-4244-af22-39499cfcd168\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gzctt" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124047 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/844845db-0b6a-4365-aa6e-760f4dd1157d-proxy-tls\") pod \"machine-config-controller-84d6567774-gptqg\" (UID: \"844845db-0b6a-4365-aa6e-760f4dd1157d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gptqg" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124066 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhtm4\" (UniqueName: \"kubernetes.io/projected/002000ac-5a96-4bb1-985a-7863f4c9e05a-kube-api-access-qhtm4\") pod \"machine-config-server-mcr5m\" (UID: \"002000ac-5a96-4bb1-985a-7863f4c9e05a\") " pod="openshift-machine-config-operator/machine-config-server-mcr5m" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124081 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl6hg\" (UniqueName: \"kubernetes.io/projected/fbbd315e-478d-48fe-8df6-a58a288adba0-kube-api-access-rl6hg\") pod \"dns-default-jh7tn\" (UID: \"fbbd315e-478d-48fe-8df6-a58a288adba0\") " pod="openshift-dns/dns-default-jh7tn" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124106 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/af45cb6f-fed7-415b-bf75-e742a875a1c0-plugins-dir\") pod \"csi-hostpathplugin-5qvjc\" (UID: \"af45cb6f-fed7-415b-bf75-e742a875a1c0\") " pod="hostpath-provisioner/csi-hostpathplugin-5qvjc" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124145 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkqrz\" (UniqueName: \"kubernetes.io/projected/00ada162-924a-42f2-85b2-62e5df70027d-kube-api-access-gkqrz\") pod \"olm-operator-6b444d44fb-42h2c\" (UID: \"00ada162-924a-42f2-85b2-62e5df70027d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-42h2c" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124177 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124218 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5a619d49-cea1-437d-8b81-6947b3085376-signing-key\") pod \"service-ca-9c57cc56f-tdwkr\" (UID: \"5a619d49-cea1-437d-8b81-6947b3085376\") " pod="openshift-service-ca/service-ca-9c57cc56f-tdwkr" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124233 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/00ada162-924a-42f2-85b2-62e5df70027d-srv-cert\") pod \"olm-operator-6b444d44fb-42h2c\" (UID: \"00ada162-924a-42f2-85b2-62e5df70027d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-42h2c" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124254 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/69074044-d0c1-4f42-bd99-bce808257377-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-czc9t\" (UID: \"69074044-d0c1-4f42-bd99-bce808257377\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czc9t" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124276 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fddd9483-5921-4244-af22-39499cfcd168-tmpfs\") pod \"packageserver-d55dfcdfc-gzctt\" (UID: \"fddd9483-5921-4244-af22-39499cfcd168\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gzctt" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124293 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz45x\" (UniqueName: \"kubernetes.io/projected/69074044-d0c1-4f42-bd99-bce808257377-kube-api-access-hz45x\") pod \"package-server-manager-789f6589d5-czc9t\" (UID: \"69074044-d0c1-4f42-bd99-bce808257377\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czc9t" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124311 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/03f55fc0-e04d-4f3a-8869-80cbb53c26ee-secret-volume\") pod \"collect-profiles-29398485-2272c\" (UID: \"03f55fc0-e04d-4f3a-8869-80cbb53c26ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398485-2272c" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124330 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fddd9483-5921-4244-af22-39499cfcd168-webhook-cert\") pod \"packageserver-d55dfcdfc-gzctt\" (UID: \"fddd9483-5921-4244-af22-39499cfcd168\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gzctt" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124348 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bf1008cf-6837-4089-ae38-2e44add1cfa5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qpsmt\" (UID: \"bf1008cf-6837-4089-ae38-2e44add1cfa5\") " pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt" Nov 23 14:47:34 
Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124376 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/002000ac-5a96-4bb1-985a-7863f4c9e05a-certs\") pod \"machine-config-server-mcr5m\" (UID: \"002000ac-5a96-4bb1-985a-7863f4c9e05a\") " pod="openshift-machine-config-operator/machine-config-server-mcr5m"
Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124398 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5a619d49-cea1-437d-8b81-6947b3085376-signing-cabundle\") pod \"service-ca-9c57cc56f-tdwkr\" (UID: \"5a619d49-cea1-437d-8b81-6947b3085376\") " pod="openshift-service-ca/service-ca-9c57cc56f-tdwkr"
Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124414 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7mkr\" (UniqueName: \"kubernetes.io/projected/af45cb6f-fed7-415b-bf75-e742a875a1c0-kube-api-access-h7mkr\") pod \"csi-hostpathplugin-5qvjc\" (UID: \"af45cb6f-fed7-415b-bf75-e742a875a1c0\") " pod="hostpath-provisioner/csi-hostpathplugin-5qvjc"
Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124450 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6b18826-96e9-4e47-9f95-3f19f298683b-profile-collector-cert\") pod \"catalog-operator-68c6474976-7x8l8\" (UID: \"b6b18826-96e9-4e47-9f95-3f19f298683b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7x8l8"
Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124471 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fbbd315e-478d-48fe-8df6-a58a288adba0-metrics-tls\") pod \"dns-default-jh7tn\" (UID: \"fbbd315e-478d-48fe-8df6-a58a288adba0\") " pod="openshift-dns/dns-default-jh7tn"
Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124487 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3751d59c-0c3d-4000-b034-7b33039c7930-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-q52hx\" (UID: \"3751d59c-0c3d-4000-b034-7b33039c7930\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q52hx"
Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124506 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/af45cb6f-fed7-415b-bf75-e742a875a1c0-mountpoint-dir\") pod \"csi-hostpathplugin-5qvjc\" (UID: \"af45cb6f-fed7-415b-bf75-e742a875a1c0\") " pod="hostpath-provisioner/csi-hostpathplugin-5qvjc"
Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124529 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2tfr\" (UniqueName: \"kubernetes.io/projected/bde0d1cd-9913-424d-886d-720a6d29cab8-kube-api-access-x2tfr\") pod \"ingress-canary-c8v2t\" (UID: \"bde0d1cd-9913-424d-886d-720a6d29cab8\") " pod="openshift-ingress-canary/ingress-canary-c8v2t"
pod="hostpath-provisioner/csi-hostpathplugin-5qvjc" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124568 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6b18826-96e9-4e47-9f95-3f19f298683b-srv-cert\") pod \"catalog-operator-68c6474976-7x8l8\" (UID: \"b6b18826-96e9-4e47-9f95-3f19f298683b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7x8l8" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124584 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ccfcbc9-c42e-4cf2-bfa3-a3c6e471b9f0-serving-cert\") pod \"service-ca-operator-777779d784-wffdm\" (UID: \"7ccfcbc9-c42e-4cf2-bfa3-a3c6e471b9f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wffdm" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124600 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q88pg\" (UniqueName: \"kubernetes.io/projected/3751d59c-0c3d-4000-b034-7b33039c7930-kube-api-access-q88pg\") pod \"multus-admission-controller-857f4d67dd-q52hx\" (UID: \"3751d59c-0c3d-4000-b034-7b33039c7930\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q52hx" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124615 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/af45cb6f-fed7-415b-bf75-e742a875a1c0-csi-data-dir\") pod \"csi-hostpathplugin-5qvjc\" (UID: \"af45cb6f-fed7-415b-bf75-e742a875a1c0\") " pod="hostpath-provisioner/csi-hostpathplugin-5qvjc" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124632 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/002000ac-5a96-4bb1-985a-7863f4c9e05a-node-bootstrap-token\") pod \"machine-config-server-mcr5m\" (UID: \"002000ac-5a96-4bb1-985a-7863f4c9e05a\") " pod="openshift-machine-config-operator/machine-config-server-mcr5m" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124648 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgk65\" (UniqueName: \"kubernetes.io/projected/b6b18826-96e9-4e47-9f95-3f19f298683b-kube-api-access-kgk65\") pod \"catalog-operator-68c6474976-7x8l8\" (UID: \"b6b18826-96e9-4e47-9f95-3f19f298683b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7x8l8" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124669 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf1008cf-6837-4089-ae38-2e44add1cfa5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qpsmt\" (UID: \"bf1008cf-6837-4089-ae38-2e44add1cfa5\") " pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124685 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/844845db-0b6a-4365-aa6e-760f4dd1157d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-gptqg\" (UID: \"844845db-0b6a-4365-aa6e-760f4dd1157d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gptqg" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124702 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbbd315e-478d-48fe-8df6-a58a288adba0-config-volume\") pod \"dns-default-jh7tn\" (UID: \"fbbd315e-478d-48fe-8df6-a58a288adba0\") " pod="openshift-dns/dns-default-jh7tn" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124718 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/00ada162-924a-42f2-85b2-62e5df70027d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-42h2c\" (UID: \"00ada162-924a-42f2-85b2-62e5df70027d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-42h2c" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124736 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrqgc\" (UniqueName: \"kubernetes.io/projected/7ccfcbc9-c42e-4cf2-bfa3-a3c6e471b9f0-kube-api-access-lrqgc\") pod \"service-ca-operator-777779d784-wffdm\" (UID: \"7ccfcbc9-c42e-4cf2-bfa3-a3c6e471b9f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wffdm" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124761 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6swh4\" (UniqueName: \"kubernetes.io/projected/bf1008cf-6837-4089-ae38-2e44add1cfa5-kube-api-access-6swh4\") pod \"marketplace-operator-79b997595-qpsmt\" (UID: \"bf1008cf-6837-4089-ae38-2e44add1cfa5\") " pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124779 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ccfcbc9-c42e-4cf2-bfa3-a3c6e471b9f0-config\") pod \"service-ca-operator-777779d784-wffdm\" (UID: \"7ccfcbc9-c42e-4cf2-bfa3-a3c6e471b9f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wffdm" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124800 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66rbj\" (UniqueName: \"kubernetes.io/projected/844845db-0b6a-4365-aa6e-760f4dd1157d-kube-api-access-66rbj\") pod \"machine-config-controller-84d6567774-gptqg\" (UID: \"844845db-0b6a-4365-aa6e-760f4dd1157d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gptqg" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124823 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/af45cb6f-fed7-415b-bf75-e742a875a1c0-registration-dir\") pod \"csi-hostpathplugin-5qvjc\" (UID: \"af45cb6f-fed7-415b-bf75-e742a875a1c0\") " pod="hostpath-provisioner/csi-hostpathplugin-5qvjc" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124842 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03f55fc0-e04d-4f3a-8869-80cbb53c26ee-config-volume\") pod \"collect-profiles-29398485-2272c\" (UID: \"03f55fc0-e04d-4f3a-8869-80cbb53c26ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398485-2272c" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124858 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bde0d1cd-9913-424d-886d-720a6d29cab8-cert\") pod \"ingress-canary-c8v2t\" (UID: 
\"bde0d1cd-9913-424d-886d-720a6d29cab8\") " pod="openshift-ingress-canary/ingress-canary-c8v2t" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.124872 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr4lk\" (UniqueName: \"kubernetes.io/projected/03f55fc0-e04d-4f3a-8869-80cbb53c26ee-kube-api-access-gr4lk\") pod \"collect-profiles-29398485-2272c\" (UID: \"03f55fc0-e04d-4f3a-8869-80cbb53c26ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398485-2272c" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.125269 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/af45cb6f-fed7-415b-bf75-e742a875a1c0-plugins-dir\") pod \"csi-hostpathplugin-5qvjc\" (UID: \"af45cb6f-fed7-415b-bf75-e742a875a1c0\") " pod="hostpath-provisioner/csi-hostpathplugin-5qvjc" Nov 23 14:47:34 crc kubenswrapper[4718]: E1123 14:47:34.125581 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:34.625570565 +0000 UTC m=+105.865190409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.130242 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ccfcbc9-c42e-4cf2-bfa3-a3c6e471b9f0-config\") pod \"service-ca-operator-777779d784-wffdm\" (UID: \"7ccfcbc9-c42e-4cf2-bfa3-a3c6e471b9f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wffdm" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.130331 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/af45cb6f-fed7-415b-bf75-e742a875a1c0-registration-dir\") pod \"csi-hostpathplugin-5qvjc\" (UID: \"af45cb6f-fed7-415b-bf75-e742a875a1c0\") " pod="hostpath-provisioner/csi-hostpathplugin-5qvjc" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.131029 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03f55fc0-e04d-4f3a-8869-80cbb53c26ee-config-volume\") pod \"collect-profiles-29398485-2272c\" (UID: \"03f55fc0-e04d-4f3a-8869-80cbb53c26ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398485-2272c" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.132544 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbbd315e-478d-48fe-8df6-a58a288adba0-config-volume\") pod \"dns-default-jh7tn\" (UID: \"fbbd315e-478d-48fe-8df6-a58a288adba0\") " pod="openshift-dns/dns-default-jh7tn" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.132580 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fddd9483-5921-4244-af22-39499cfcd168-tmpfs\") pod 
\"packageserver-d55dfcdfc-gzctt\" (UID: \"fddd9483-5921-4244-af22-39499cfcd168\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gzctt" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.132592 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/844845db-0b6a-4365-aa6e-760f4dd1157d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-gptqg\" (UID: \"844845db-0b6a-4365-aa6e-760f4dd1157d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gptqg" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.133201 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/00ada162-924a-42f2-85b2-62e5df70027d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-42h2c\" (UID: \"00ada162-924a-42f2-85b2-62e5df70027d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-42h2c" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.133474 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5a619d49-cea1-437d-8b81-6947b3085376-signing-cabundle\") pod \"service-ca-9c57cc56f-tdwkr\" (UID: \"5a619d49-cea1-437d-8b81-6947b3085376\") " pod="openshift-service-ca/service-ca-9c57cc56f-tdwkr" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.133691 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/af45cb6f-fed7-415b-bf75-e742a875a1c0-mountpoint-dir\") pod \"csi-hostpathplugin-5qvjc\" (UID: \"af45cb6f-fed7-415b-bf75-e742a875a1c0\") " pod="hostpath-provisioner/csi-hostpathplugin-5qvjc" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.133789 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/af45cb6f-fed7-415b-bf75-e742a875a1c0-csi-data-dir\") pod \"csi-hostpathplugin-5qvjc\" (UID: \"af45cb6f-fed7-415b-bf75-e742a875a1c0\") " pod="hostpath-provisioner/csi-hostpathplugin-5qvjc" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.133849 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bnsk\" (UniqueName: \"kubernetes.io/projected/77ec520a-813e-497b-97bb-c0271026540b-kube-api-access-9bnsk\") pod \"authentication-operator-69f744f599-sn5vq\" (UID: \"77ec520a-813e-497b-97bb-c0271026540b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sn5vq" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.133729 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/af45cb6f-fed7-415b-bf75-e742a875a1c0-socket-dir\") pod \"csi-hostpathplugin-5qvjc\" (UID: \"af45cb6f-fed7-415b-bf75-e742a875a1c0\") " pod="hostpath-provisioner/csi-hostpathplugin-5qvjc" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.134505 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bde0d1cd-9913-424d-886d-720a6d29cab8-cert\") pod \"ingress-canary-c8v2t\" (UID: \"bde0d1cd-9913-424d-886d-720a6d29cab8\") " pod="openshift-ingress-canary/ingress-canary-c8v2t" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.134620 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/844845db-0b6a-4365-aa6e-760f4dd1157d-proxy-tls\") pod \"machine-config-controller-84d6567774-gptqg\" (UID: \"844845db-0b6a-4365-aa6e-760f4dd1157d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gptqg" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.134809 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf1008cf-6837-4089-ae38-2e44add1cfa5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qpsmt\" (UID: \"bf1008cf-6837-4089-ae38-2e44add1cfa5\") " pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.136995 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5a619d49-cea1-437d-8b81-6947b3085376-signing-key\") pod \"service-ca-9c57cc56f-tdwkr\" (UID: \"5a619d49-cea1-437d-8b81-6947b3085376\") " pod="openshift-service-ca/service-ca-9c57cc56f-tdwkr" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.137665 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/69074044-d0c1-4f42-bd99-bce808257377-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-czc9t\" (UID: \"69074044-d0c1-4f42-bd99-bce808257377\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czc9t" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.139197 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/03f55fc0-e04d-4f3a-8869-80cbb53c26ee-secret-volume\") pod \"collect-profiles-29398485-2272c\" (UID: \"03f55fc0-e04d-4f3a-8869-80cbb53c26ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398485-2272c" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.139731 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/002000ac-5a96-4bb1-985a-7863f4c9e05a-certs\") pod \"machine-config-server-mcr5m\" (UID: \"002000ac-5a96-4bb1-985a-7863f4c9e05a\") " pod="openshift-machine-config-operator/machine-config-server-mcr5m" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.139737 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/00ada162-924a-42f2-85b2-62e5df70027d-srv-cert\") pod \"olm-operator-6b444d44fb-42h2c\" (UID: \"00ada162-924a-42f2-85b2-62e5df70027d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-42h2c" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.140742 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6b18826-96e9-4e47-9f95-3f19f298683b-srv-cert\") pod \"catalog-operator-68c6474976-7x8l8\" (UID: \"b6b18826-96e9-4e47-9f95-3f19f298683b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7x8l8" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.140946 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fddd9483-5921-4244-af22-39499cfcd168-apiservice-cert\") pod \"packageserver-d55dfcdfc-gzctt\" (UID: \"fddd9483-5921-4244-af22-39499cfcd168\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gzctt" Nov 23 14:47:34 crc 
Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.142097 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3751d59c-0c3d-4000-b034-7b33039c7930-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-q52hx\" (UID: \"3751d59c-0c3d-4000-b034-7b33039c7930\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q52hx"
Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.142229 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6b18826-96e9-4e47-9f95-3f19f298683b-profile-collector-cert\") pod \"catalog-operator-68c6474976-7x8l8\" (UID: \"b6b18826-96e9-4e47-9f95-3f19f298683b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7x8l8"
Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.142495 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fddd9483-5921-4244-af22-39499cfcd168-webhook-cert\") pod \"packageserver-d55dfcdfc-gzctt\" (UID: \"fddd9483-5921-4244-af22-39499cfcd168\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gzctt"
Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.143246 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/002000ac-5a96-4bb1-985a-7863f4c9e05a-node-bootstrap-token\") pod \"machine-config-server-mcr5m\" (UID: \"002000ac-5a96-4bb1-985a-7863f4c9e05a\") " pod="openshift-machine-config-operator/machine-config-server-mcr5m"
Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.143321 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bf1008cf-6837-4089-ae38-2e44add1cfa5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qpsmt\" (UID: \"bf1008cf-6837-4089-ae38-2e44add1cfa5\") " pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt"
Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.144621 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fbbd315e-478d-48fe-8df6-a58a288adba0-metrics-tls\") pod \"dns-default-jh7tn\" (UID: \"fbbd315e-478d-48fe-8df6-a58a288adba0\") " pod="openshift-dns/dns-default-jh7tn"
Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.145517 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ccfcbc9-c42e-4cf2-bfa3-a3c6e471b9f0-serving-cert\") pod \"service-ca-operator-777779d784-wffdm\" (UID: \"7ccfcbc9-c42e-4cf2-bfa3-a3c6e471b9f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wffdm"
Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.148831 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgq8\" (UniqueName: \"kubernetes.io/projected/205d682b-7592-4b53-bcf6-0300c1084046-kube-api-access-dsgq8\") pod \"route-controller-manager-6576b87f9c-zmppb\" (UID: \"205d682b-7592-4b53-bcf6-0300c1084046\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb"
\"cluster-image-registry-operator-dc59b4c8b-2vccg\" (UID: \"cb2788e1-4aae-4991-be1e-78f77e0e0811\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2vccg" Nov 23 14:47:34 crc kubenswrapper[4718]: W1123 14:47:34.169528 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87a6d300_fa67_4762_9025_232fcb2ea96d.slice/crio-72233093c5d5db2cd9cc2d1fa0e24dbaa7646aea7dc5741320d637bda3327c6e WatchSource:0}: Error finding container 72233093c5d5db2cd9cc2d1fa0e24dbaa7646aea7dc5741320d637bda3327c6e: Status 404 returned error can't find the container with id 72233093c5d5db2cd9cc2d1fa0e24dbaa7646aea7dc5741320d637bda3327c6e Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.188263 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" event={"ID":"2635b714-cae2-41c2-8a7b-87075c04e2b3","Type":"ContainerStarted","Data":"d7f8218a60bddb03a076854cfbd5bb93e883f06b09e454283b1f1f52cc3ee0bd"} Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.191795 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rzq5f" event={"ID":"0e8598b7-32cf-41c5-9a1b-382a5c0bcc2f","Type":"ContainerStarted","Data":"afe9a558228a4f79f6239720a772c7b8456787967e32b8f81d3927c248d4af31"} Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.194986 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwjtz\" (UniqueName: \"kubernetes.io/projected/cd397098-2850-4c0e-aa5b-5147532da7a5-kube-api-access-kwjtz\") pod \"apiserver-7bbb656c7d-pfnl4\" (UID: \"cd397098-2850-4c0e-aa5b-5147532da7a5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.198549 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7vdf5" event={"ID":"95464c4e-4616-4ab3-9928-4dc41beee4af","Type":"ContainerStarted","Data":"878089f2f0f52555a2f62a82959a7c1ea31f3d65db0e14e3642480924af4de58"} Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.203670 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnl8l" event={"ID":"d156eabe-7e2f-4336-acaa-4af08cfdea8d","Type":"ContainerStarted","Data":"94670ebbeffce3b215ffa46ff36c5acbb6046f079985f7769ab5e05f7861b84d"} Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.204094 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnl8l" event={"ID":"d156eabe-7e2f-4336-acaa-4af08cfdea8d","Type":"ContainerStarted","Data":"a175640d99570303efe775e577379f1c86753b46fec8287408b6b04d9d64ff2c"} Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.215929 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z55xk\" (UniqueName: \"kubernetes.io/projected/828f7457-8fd5-4c12-9f95-1dd938faacb0-kube-api-access-z55xk\") pod \"ingress-operator-5b745b69d9-m47rw\" (UID: \"828f7457-8fd5-4c12-9f95-1dd938faacb0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m47rw" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.225455 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pczmz\" (UniqueName: \"kubernetes.io/projected/4281782c-70f4-442f-814b-2e60ea9dae88-kube-api-access-pczmz\") pod \"router-default-5444994796-7f7vb\" 
(UID: \"4281782c-70f4-442f-814b-2e60ea9dae88\") " pod="openshift-ingress/router-default-5444994796-7f7vb" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.225486 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.225628 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb" Nov 23 14:47:34 crc kubenswrapper[4718]: E1123 14:47:34.225647 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:34.725630859 +0000 UTC m=+105.965250703 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.225917 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:34 crc kubenswrapper[4718]: E1123 14:47:34.226234 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:34.726219524 +0000 UTC m=+105.965839368 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.230368 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-sn5vq" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.250533 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mskq" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.250776 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/108ad2a6-0176-40d5-9252-577047cea58d-bound-sa-token\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.272169 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/828f7457-8fd5-4c12-9f95-1dd938faacb0-bound-sa-token\") pod \"ingress-operator-5b745b69d9-m47rw\" (UID: \"828f7457-8fd5-4c12-9f95-1dd938faacb0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m47rw" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.274789 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xwp8r"] Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.281073 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.289066 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2vccg" Nov 23 14:47:34 crc kubenswrapper[4718]: W1123 14:47:34.294078 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef388340_e800_4dbf_a7cf_6d6bd9a1eecf.slice/crio-5a400f702d1b95e53719b526f10044a6491aaa4c062fe0f0ea599df7ecab6d22 WatchSource:0}: Error finding container 5a400f702d1b95e53719b526f10044a6491aaa4c062fe0f0ea599df7ecab6d22: Status 404 returned error can't find the container with id 5a400f702d1b95e53719b526f10044a6491aaa4c062fe0f0ea599df7ecab6d22 Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.306415 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ee1fe20-1606-4835-99b5-7cd564600ea7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jznqv\" (UID: \"0ee1fe20-1606-4835-99b5-7cd564600ea7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jznqv" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.323978 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhtm4\" (UniqueName: \"kubernetes.io/projected/002000ac-5a96-4bb1-985a-7863f4c9e05a-kube-api-access-qhtm4\") pod \"machine-config-server-mcr5m\" (UID: \"002000ac-5a96-4bb1-985a-7863f4c9e05a\") " pod="openshift-machine-config-operator/machine-config-server-mcr5m" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.329369 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m47rw" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.329862 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-7f7vb" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.330046 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:34 crc kubenswrapper[4718]: E1123 14:47:34.330200 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:34.830172748 +0000 UTC m=+106.069792592 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.330412 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:34 crc kubenswrapper[4718]: E1123 14:47:34.330967 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:34.830954918 +0000 UTC m=+106.070574762 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.348259 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr4lk\" (UniqueName: \"kubernetes.io/projected/03f55fc0-e04d-4f3a-8869-80cbb53c26ee-kube-api-access-gr4lk\") pod \"collect-profiles-29398485-2272c\" (UID: \"03f55fc0-e04d-4f3a-8869-80cbb53c26ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398485-2272c" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.349516 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vl2sm" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.357116 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-crvv6" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.374521 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-lglwp"] Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.375275 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl6hg\" (UniqueName: \"kubernetes.io/projected/fbbd315e-478d-48fe-8df6-a58a288adba0-kube-api-access-rl6hg\") pod \"dns-default-jh7tn\" (UID: \"fbbd315e-478d-48fe-8df6-a58a288adba0\") " pod="openshift-dns/dns-default-jh7tn" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.377635 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8m5b"] Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.386396 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jznqv" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.388114 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nfkmp"] Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.394407 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkqrz\" (UniqueName: \"kubernetes.io/projected/00ada162-924a-42f2-85b2-62e5df70027d-kube-api-access-gkqrz\") pod \"olm-operator-6b444d44fb-42h2c\" (UID: \"00ada162-924a-42f2-85b2-62e5df70027d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-42h2c" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.412170 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-42h2c" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.413195 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wh2p\" (UniqueName: \"kubernetes.io/projected/fddd9483-5921-4244-af22-39499cfcd168-kube-api-access-7wh2p\") pod \"packageserver-d55dfcdfc-gzctt\" (UID: \"fddd9483-5921-4244-af22-39499cfcd168\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gzctt" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.424138 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8cmz"] Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.429405 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-bzr6j"] Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.429591 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8p8f\" (UniqueName: \"kubernetes.io/projected/5a619d49-cea1-437d-8b81-6947b3085376-kube-api-access-b8p8f\") pod \"service-ca-9c57cc56f-tdwkr\" (UID: \"5a619d49-cea1-437d-8b81-6947b3085376\") " pod="openshift-service-ca/service-ca-9c57cc56f-tdwkr" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.432507 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:34 crc kubenswrapper[4718]: E1123 14:47:34.432669 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:34.932650595 +0000 UTC m=+106.172270439 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.436644 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-tdwkr" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.437098 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:34 crc kubenswrapper[4718]: E1123 14:47:34.437495 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-23 14:47:34.937484577 +0000 UTC m=+106.177104421 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.446042 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398485-2272c" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.463178 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-jh7tn" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.463642 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gzctt" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.466237 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66rbj\" (UniqueName: \"kubernetes.io/projected/844845db-0b6a-4365-aa6e-760f4dd1157d-kube-api-access-66rbj\") pod \"machine-config-controller-84d6567774-gptqg\" (UID: \"844845db-0b6a-4365-aa6e-760f4dd1157d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gptqg" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.491545 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrqgc\" (UniqueName: \"kubernetes.io/projected/7ccfcbc9-c42e-4cf2-bfa3-a3c6e471b9f0-kube-api-access-lrqgc\") pod \"service-ca-operator-777779d784-wffdm\" (UID: \"7ccfcbc9-c42e-4cf2-bfa3-a3c6e471b9f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wffdm" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.497937 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-mcr5m" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.508120 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6swh4\" (UniqueName: \"kubernetes.io/projected/bf1008cf-6837-4089-ae38-2e44add1cfa5-kube-api-access-6swh4\") pod \"marketplace-operator-79b997595-qpsmt\" (UID: \"bf1008cf-6837-4089-ae38-2e44add1cfa5\") " pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.518300 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8xb95"] Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.535576 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgk65\" (UniqueName: \"kubernetes.io/projected/b6b18826-96e9-4e47-9f95-3f19f298683b-kube-api-access-kgk65\") pod \"catalog-operator-68c6474976-7x8l8\" (UID: \"b6b18826-96e9-4e47-9f95-3f19f298683b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7x8l8" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.536282 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-45fwt"] Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.537217 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnph"] Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.538967 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:34 crc kubenswrapper[4718]: E1123 14:47:34.539396 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:35.039382079 +0000 UTC m=+106.279001923 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.550372 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mzr52"] Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.565468 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz45x\" (UniqueName: \"kubernetes.io/projected/69074044-d0c1-4f42-bd99-bce808257377-kube-api-access-hz45x\") pod \"package-server-manager-789f6589d5-czc9t\" (UID: \"69074044-d0c1-4f42-bd99-bce808257377\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czc9t" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.567202 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-sn5vq"] Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.569686 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7mkr\" (UniqueName: \"kubernetes.io/projected/af45cb6f-fed7-415b-bf75-e742a875a1c0-kube-api-access-h7mkr\") pod \"csi-hostpathplugin-5qvjc\" (UID: \"af45cb6f-fed7-415b-bf75-e742a875a1c0\") " pod="hostpath-provisioner/csi-hostpathplugin-5qvjc" Nov 23 14:47:34 crc kubenswrapper[4718]: W1123 14:47:34.571226 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc94ba48a_8de9_40a2_9c80_d4496b7b8cf6.slice/crio-f405707d531278966f904098deb0015ac14193693bebd19103da11f5716395b6 WatchSource:0}: Error finding container f405707d531278966f904098deb0015ac14193693bebd19103da11f5716395b6: Status 404 returned error can't find the container with id f405707d531278966f904098deb0015ac14193693bebd19103da11f5716395b6 Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.576535 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2tfr\" (UniqueName: \"kubernetes.io/projected/bde0d1cd-9913-424d-886d-720a6d29cab8-kube-api-access-x2tfr\") pod \"ingress-canary-c8v2t\" (UID: \"bde0d1cd-9913-424d-886d-720a6d29cab8\") " pod="openshift-ingress-canary/ingress-canary-c8v2t" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.587369 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q88pg\" (UniqueName: \"kubernetes.io/projected/3751d59c-0c3d-4000-b034-7b33039c7930-kube-api-access-q88pg\") pod \"multus-admission-controller-857f4d67dd-q52hx\" (UID: \"3751d59c-0c3d-4000-b034-7b33039c7930\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q52hx" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.640037 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:34 crc kubenswrapper[4718]: E1123 
14:47:34.640327 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:35.140317106 +0000 UTC m=+106.379936950 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:34 crc kubenswrapper[4718]: W1123 14:47:34.645915 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fb8d21d_864e_4fa4_9a79_0ec8c8160fd4.slice/crio-083cb288b51d8cb140737abfec7fb708abb0725150cf37f193ada639fb98f703 WatchSource:0}: Error finding container 083cb288b51d8cb140737abfec7fb708abb0725150cf37f193ada639fb98f703: Status 404 returned error can't find the container with id 083cb288b51d8cb140737abfec7fb708abb0725150cf37f193ada639fb98f703 Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.655112 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-mlwtl"] Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.666041 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hnvnt"] Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.697886 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gptqg" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.698471 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czc9t" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.705961 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-q52hx" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.717362 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-c8v2t" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.723236 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7x8l8" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.728279 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mskq"] Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.728704 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wffdm" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.741059 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:34 crc kubenswrapper[4718]: E1123 14:47:34.741506 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:35.241489839 +0000 UTC m=+106.481109683 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.746426 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb"] Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.748993 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.791767 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5qvjc" Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.830106 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4"] Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.845091 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:34 crc kubenswrapper[4718]: E1123 14:47:34.845479 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:35.345426643 +0000 UTC m=+106.585046487 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.853600 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2vccg"] Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.914125 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-m47rw"] Nov 23 14:47:34 crc kubenswrapper[4718]: I1123 14:47:34.946640 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:34 crc kubenswrapper[4718]: E1123 14:47:34.946924 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:35.446909974 +0000 UTC m=+106.686529818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.050077 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:35 crc kubenswrapper[4718]: E1123 14:47:35.050803 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:35.550789536 +0000 UTC m=+106.790409380 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.135028 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vl2sm"] Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.151804 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:35 crc kubenswrapper[4718]: E1123 14:47:35.151996 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:35.65194397 +0000 UTC m=+106.891563814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.152265 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:35 crc kubenswrapper[4718]: E1123 14:47:35.152720 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:35.652707919 +0000 UTC m=+106.892327763 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.224913 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-tdwkr"] Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.226184 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mskq" event={"ID":"56de4d78-445d-4e86-80eb-6096912ef506","Type":"ContainerStarted","Data":"905f9bde0bc6d7a94bde3da1c51828bb5d640fadb2601e86dd9cbab7ca0fcf52"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.234926 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gzctt"] Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.243215 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bzr6j" event={"ID":"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2","Type":"ContainerStarted","Data":"9bec6fed55e9a7c30375f4aa36daa7fe898bc21549bfab0c3cbe97c54c1f6db7"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.251561 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7vdf5" event={"ID":"95464c4e-4616-4ab3-9928-4dc41beee4af","Type":"ContainerStarted","Data":"89eb70ea940a25e803eeda4a4df6a1b07eaf298686995a09715810671f35c30f"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.251601 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7vdf5" event={"ID":"95464c4e-4616-4ab3-9928-4dc41beee4af","Type":"ContainerStarted","Data":"984866e07b7c83d478cab18079cc21b9782e22bdb0b3d5c588ff52a56a01b5ca"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.252711 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:35 crc kubenswrapper[4718]: E1123 14:47:35.252832 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:35.752801925 +0000 UTC m=+106.992421779 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.253046 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:35 crc kubenswrapper[4718]: E1123 14:47:35.253294 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:35.753282617 +0000 UTC m=+106.992902461 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.264564 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" event={"ID":"cd397098-2850-4c0e-aa5b-5147532da7a5","Type":"ContainerStarted","Data":"4c2c10b1818be254f96291b13b500f5d9b82cdf26c2c7328da0671f172b1c291"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.344798 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnl8l" event={"ID":"d156eabe-7e2f-4336-acaa-4af08cfdea8d","Type":"ContainerStarted","Data":"3a9a04ba7d2d6c061a779176d07e36b4e752ad6c018436c1c8b87fba2ef3d5ae"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.347548 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m47rw" event={"ID":"828f7457-8fd5-4c12-9f95-1dd938faacb0","Type":"ContainerStarted","Data":"3319322be72c3863921b8e5da7f4d51fe942ba17a85fa7b3599a53b7994c2570"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.365211 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:35 crc kubenswrapper[4718]: E1123 14:47:35.365356 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:35.865318316 +0000 UTC m=+107.104938160 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.365626 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:35 crc kubenswrapper[4718]: E1123 14:47:35.368544 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:35.868521306 +0000 UTC m=+107.108141360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.372491 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mzr52" event={"ID":"26893a84-2d41-4266-80db-235e4057a14f","Type":"ContainerStarted","Data":"7653f3f90a2365e8ff626fbc86732a4c0364ef97d2787d2b89a703f941d72f8b"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.388784 4718 generic.go:334] "Generic (PLEG): container finished" podID="2635b714-cae2-41c2-8a7b-87075c04e2b3" containerID="90a69fe4a2824e79b3313a044e14402749dc0c25cb60097fef6df52343e06d68" exitCode=0 Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.389304 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" event={"ID":"2635b714-cae2-41c2-8a7b-87075c04e2b3","Type":"ContainerDied","Data":"90a69fe4a2824e79b3313a044e14402749dc0c25cb60097fef6df52343e06d68"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.392969 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2vccg" event={"ID":"cb2788e1-4aae-4991-be1e-78f77e0e0811","Type":"ContainerStarted","Data":"36a3d0ce498bf97d42a14436128f920e71270b333b26c3adb320a01eee0a0750"} Nov 23 14:47:35 crc kubenswrapper[4718]: W1123 14:47:35.395825 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfddd9483_5921_4244_af22_39499cfcd168.slice/crio-b22080a5c410a7769301eedc13b5b7a21e674fa01a38080c540e83c17dec001e WatchSource:0}: Error finding container b22080a5c410a7769301eedc13b5b7a21e674fa01a38080c540e83c17dec001e: Status 404 returned error can't find the container with id b22080a5c410a7769301eedc13b5b7a21e674fa01a38080c540e83c17dec001e Nov 23 14:47:35 crc 
kubenswrapper[4718]: I1123 14:47:35.415970 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-sn5vq" event={"ID":"77ec520a-813e-497b-97bb-c0271026540b","Type":"ContainerStarted","Data":"c0833d51209f03835f5c5bd5643b4c4f037f083bfc65b728323a9f1e9ceebef6"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.422580 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8xb95" event={"ID":"6bfbc7c8-2c86-4bf8-87b6-6eec240a5e2e","Type":"ContainerStarted","Data":"85ca6c23f45ed67f7a78cb8f54cdf8c67f101f254cca9e490cd1566739b5be91"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.467049 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:35 crc kubenswrapper[4718]: E1123 14:47:35.467862 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:35.967828883 +0000 UTC m=+107.207448727 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.468070 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:35 crc kubenswrapper[4718]: E1123 14:47:35.469117 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:35.969103245 +0000 UTC m=+107.208723089 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.505876 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mlwtl" event={"ID":"7600967f-215f-411c-b19a-8433e7a266ee","Type":"ContainerStarted","Data":"3f869281e8b14027185ede36e868db8f511669e9108ba2b80512c4a16dcba55e"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.519169 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" event={"ID":"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf","Type":"ContainerStarted","Data":"22950400603ee3befe70223e6e0fcac3045c88a7bbef4e17e09cdc2ac1de22df"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.519887 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" event={"ID":"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf","Type":"ContainerStarted","Data":"5a400f702d1b95e53719b526f10044a6491aaa4c062fe0f0ea599df7ecab6d22"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.523252 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.551834 4718 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-xwp8r container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.551909 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" podUID="ef388340-e800-4dbf-a7cf-6d6bd9a1eecf" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.560711 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb" event={"ID":"205d682b-7592-4b53-bcf6-0300c1084046","Type":"ContainerStarted","Data":"5e0ef202895d3a79d5fe35cf26313eb13a7bae3fa46e02f84471d3f6dcbe0450"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.563040 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jznqv"] Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.601240 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-lglwp" event={"ID":"c94ba48a-8de9-40a2-9c80-d4496b7b8cf6","Type":"ContainerStarted","Data":"f405707d531278966f904098deb0015ac14193693bebd19103da11f5716395b6"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.607229 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:35 crc kubenswrapper[4718]: E1123 14:47:35.608321 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:36.108299713 +0000 UTC m=+107.347919557 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.609886 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jh7tn"] Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.638622 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-crvv6"] Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.656774 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" event={"ID":"fa4bc064-a334-47bd-820e-00ced1c89025","Type":"ContainerStarted","Data":"5fc14c1ec8d617a1f2ea56d94ff9aa9625ecf0987b7a04a91528ab659d028f72"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.682493 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-42h2c"] Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.708992 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:35 crc kubenswrapper[4718]: E1123 14:47:35.710430 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:36.210414499 +0000 UTC m=+107.450034343 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.726839 4718 patch_prober.go:28] interesting pod/console-operator-58897d9998-nfkmp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.726887 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-nfkmp" podUID="5607cd92-eff9-4eff-8dc7-4c0999b88fd4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.687567 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nfkmp" event={"ID":"5607cd92-eff9-4eff-8dc7-4c0999b88fd4","Type":"ContainerStarted","Data":"22892a857f165e137cca6f48ce2b27d2ea2bc97ac3bfa6a8adc7e2b1338b1199"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.732861 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-nfkmp" Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.732883 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nfkmp" event={"ID":"5607cd92-eff9-4eff-8dc7-4c0999b88fd4","Type":"ContainerStarted","Data":"5aa8a845188af44d66c37f08cfbbc3479ad9859694831dfdb3cebb6014fa7595"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.732904 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rzq5f" event={"ID":"0e8598b7-32cf-41c5-9a1b-382a5c0bcc2f","Type":"ContainerStarted","Data":"8dcc9647fb073adc4fb6926b7f0035494599df6978f9139c216dd5fd80f15569"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.732924 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-gptqg"] Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.732948 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rzq5f" event={"ID":"0e8598b7-32cf-41c5-9a1b-382a5c0bcc2f","Type":"ContainerStarted","Data":"cb93e265c92075cd99293936079eae63115f4ba61d10ee9015c4375344bbb9bd"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.740038 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398485-2272c"] Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.742935 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8cmz" event={"ID":"51a36f59-fabf-4d64-bd21-c294a70460cd","Type":"ContainerStarted","Data":"3c275e3f2d35ac6a579977877ee6b167909d8f88474df4261f3a29616fed9b4a"} Nov 23 
14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.749860 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-45fwt" event={"ID":"bb094bc3-3ea8-4a2b-9f41-61621ba47667","Type":"ContainerStarted","Data":"77f180b58b2539b1fe3ef92ff22e8a0b8ea5deb047e83deab21ad37b9ce1c802"} Nov 23 14:47:35 crc kubenswrapper[4718]: W1123 14:47:35.752803 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00ada162_924a_42f2_85b2_62e5df70027d.slice/crio-9611da283ac4faaca15983210fbd912939ae4052e259f2c9a4bdf3036c2741ab WatchSource:0}: Error finding container 9611da283ac4faaca15983210fbd912939ae4052e259f2c9a4bdf3036c2741ab: Status 404 returned error can't find the container with id 9611da283ac4faaca15983210fbd912939ae4052e259f2c9a4bdf3036c2741ab Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.758719 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-mcr5m" event={"ID":"002000ac-5a96-4bb1-985a-7863f4c9e05a","Type":"ContainerStarted","Data":"a464c4f589483f387fbb4079ba16f0d20b6b79bb211716ccbfe8ab1a50ced5df"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.764285 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7hg9r" event={"ID":"87a6d300-fa67-4762-9025-232fcb2ea96d","Type":"ContainerStarted","Data":"2ea823a387f6e719a74d9d4959dcfaad17f2247e0fd53255dea1fdddbc697ae3"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.764320 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7hg9r" event={"ID":"87a6d300-fa67-4762-9025-232fcb2ea96d","Type":"ContainerStarted","Data":"72233093c5d5db2cd9cc2d1fa0e24dbaa7646aea7dc5741320d637bda3327c6e"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.766824 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-7hg9r" Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.776779 4718 patch_prober.go:28] interesting pod/downloads-7954f5f757-7hg9r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.776834 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7hg9r" podUID="87a6d300-fa67-4762-9025-232fcb2ea96d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.796334 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qpsmt"] Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.804732 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnph" event={"ID":"4fb8d21d-864e-4fa4-9a79-0ec8c8160fd4","Type":"ContainerStarted","Data":"083cb288b51d8cb140737abfec7fb708abb0725150cf37f193ada639fb98f703"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.809730 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.811197 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8m5b" event={"ID":"ff9a491e-b915-4980-92f9-71844bc90a65","Type":"ContainerStarted","Data":"d391a3b86f0d4751450743d5d44cb37f474609f1c21aafad0238eeeaf749a37a"} Nov 23 14:47:35 crc kubenswrapper[4718]: E1123 14:47:35.811585 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:36.311560862 +0000 UTC m=+107.551180706 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.822821 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-7f7vb" event={"ID":"4281782c-70f4-442f-814b-2e60ea9dae88","Type":"ContainerStarted","Data":"858e13f5ac66597dcdca59db33d82b34dbe65950dfaf6f6859ce9be90b19ffca"} Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.850429 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czc9t"] Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.868888 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-q52hx"] Nov 23 14:47:35 crc kubenswrapper[4718]: I1123 14:47:35.913526 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:35 crc kubenswrapper[4718]: E1123 14:47:35.916494 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:36.416481931 +0000 UTC m=+107.656101775 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.014574 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:36 crc kubenswrapper[4718]: E1123 14:47:36.014725 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:36.514701799 +0000 UTC m=+107.754321643 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.015763 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:36 crc kubenswrapper[4718]: E1123 14:47:36.016085 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:36.516069904 +0000 UTC m=+107.755689748 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.060727 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-c8v2t"] Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.118226 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:36 crc kubenswrapper[4718]: E1123 14:47:36.118647 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:36.618627212 +0000 UTC m=+107.858247056 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.124626 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:36 crc kubenswrapper[4718]: E1123 14:47:36.125569 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:36.625557488 +0000 UTC m=+107.865177332 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.135977 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-7vdf5" podStartSLOduration=75.135950471 podStartE2EDuration="1m15.135950471s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:36.131210321 +0000 UTC m=+107.370830155" watchObservedRunningTime="2025-11-23 14:47:36.135950471 +0000 UTC m=+107.375570315" Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.174457 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-nfkmp" podStartSLOduration=75.174420256 podStartE2EDuration="1m15.174420256s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:36.167847359 +0000 UTC m=+107.407467203" watchObservedRunningTime="2025-11-23 14:47:36.174420256 +0000 UTC m=+107.414040100" Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.189846 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wffdm"] Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.218996 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-7f7vb" podStartSLOduration=75.218980125 podStartE2EDuration="1m15.218980125s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:36.215917527 +0000 UTC m=+107.455537371" watchObservedRunningTime="2025-11-23 14:47:36.218980125 +0000 UTC m=+107.458599959" Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.226922 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:36 crc kubenswrapper[4718]: E1123 14:47:36.227264 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:36.727251965 +0000 UTC m=+107.966871809 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.243242 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5qvjc"] Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.248504 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7x8l8"] Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.260126 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" podStartSLOduration=75.260097637 podStartE2EDuration="1m15.260097637s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:36.248584475 +0000 UTC m=+107.488204309" watchObservedRunningTime="2025-11-23 14:47:36.260097637 +0000 UTC m=+107.499717521" Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.295501 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rzq5f" podStartSLOduration=75.295486373 podStartE2EDuration="1m15.295486373s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:36.285870069 +0000 UTC m=+107.525489913" watchObservedRunningTime="2025-11-23 14:47:36.295486373 +0000 UTC m=+107.535106217" Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.328371 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:36 crc kubenswrapper[4718]: E1123 14:47:36.328843 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:36.828830088 +0000 UTC m=+108.068449932 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.330203 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-7f7vb" Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.337753 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnl8l" podStartSLOduration=75.335991839 podStartE2EDuration="1m15.335991839s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:36.333018914 +0000 UTC m=+107.572638758" watchObservedRunningTime="2025-11-23 14:47:36.335991839 +0000 UTC m=+107.575611683" Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.341346 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7f7vb container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.341383 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7f7vb" podUID="4281782c-70f4-442f-814b-2e60ea9dae88" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.365166 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-7hg9r" podStartSLOduration=75.365150068 podStartE2EDuration="1m15.365150068s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:36.362000909 +0000 UTC m=+107.601620753" watchObservedRunningTime="2025-11-23 14:47:36.365150068 +0000 UTC m=+107.604769912" Nov 23 14:47:36 crc kubenswrapper[4718]: W1123 14:47:36.388928 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf45cb6f_fed7_415b_bf75_e742a875a1c0.slice/crio-34fad39af7078f5871e7092fe0115c6bbcb04ef902c3ea94b3a072832cf0404e WatchSource:0}: Error finding container 34fad39af7078f5871e7092fe0115c6bbcb04ef902c3ea94b3a072832cf0404e: Status 404 returned error can't find the container with id 34fad39af7078f5871e7092fe0115c6bbcb04ef902c3ea94b3a072832cf0404e Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.430574 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:36 crc kubenswrapper[4718]: E1123 14:47:36.430985 4718 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:36.930971996 +0000 UTC m=+108.170591840 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.445701 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8m5b" podStartSLOduration=75.445684149 podStartE2EDuration="1m15.445684149s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:36.445168696 +0000 UTC m=+107.684788530" watchObservedRunningTime="2025-11-23 14:47:36.445684149 +0000 UTC m=+107.685303993" Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.446227 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-mcr5m" podStartSLOduration=5.446222013 podStartE2EDuration="5.446222013s" podCreationTimestamp="2025-11-23 14:47:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:36.403743757 +0000 UTC m=+107.643363601" watchObservedRunningTime="2025-11-23 14:47:36.446222013 +0000 UTC m=+107.685841867" Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.485545 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8cmz" podStartSLOduration=75.485528749 podStartE2EDuration="1m15.485528749s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:36.485290703 +0000 UTC m=+107.724910547" watchObservedRunningTime="2025-11-23 14:47:36.485528749 +0000 UTC m=+107.725148593" Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.533293 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:36 crc kubenswrapper[4718]: E1123 14:47:36.533607 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:37.033595516 +0000 UTC m=+108.273215360 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.634346 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:36 crc kubenswrapper[4718]: E1123 14:47:36.634547 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:37.134522273 +0000 UTC m=+108.374142117 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.634788 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:36 crc kubenswrapper[4718]: E1123 14:47:36.635064 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:37.135053277 +0000 UTC m=+108.374673121 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.740155 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:36 crc kubenswrapper[4718]: E1123 14:47:36.740531 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:37.240512419 +0000 UTC m=+108.480132263 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.740590 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:36 crc kubenswrapper[4718]: E1123 14:47:36.740902 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:37.240880968 +0000 UTC m=+108.480500812 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.835196 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jznqv" event={"ID":"0ee1fe20-1606-4835-99b5-7cd564600ea7","Type":"ContainerStarted","Data":"05f99c07e3b422c7eae39d423815e7b278731e76dae1f1afba8ba8451e55bcc4"} Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.835586 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jznqv" event={"ID":"0ee1fe20-1606-4835-99b5-7cd564600ea7","Type":"ContainerStarted","Data":"94ced898078de852f0b56debd0f0f7ae5c38c6ec78d14a46ea3df00f85e8f3ae"} Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.838176 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jh7tn" event={"ID":"fbbd315e-478d-48fe-8df6-a58a288adba0","Type":"ContainerStarted","Data":"0e557f704a59146d3be6bbd3fcb26e140831c9912b27553ebefb75cf078deb4f"} Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.838209 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jh7tn" event={"ID":"fbbd315e-478d-48fe-8df6-a58a288adba0","Type":"ContainerStarted","Data":"7505d6804ad7db513adfa04eaa9008bd2a9c5fc3f3eb9658c61e10dee82038d3"} Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.843331 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:36 crc kubenswrapper[4718]: E1123 14:47:36.843860 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:37.343832207 +0000 UTC m=+108.583452111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.844053 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:36 crc kubenswrapper[4718]: E1123 14:47:36.844342 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:37.344329869 +0000 UTC m=+108.583949713 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.847106 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-lglwp" event={"ID":"c94ba48a-8de9-40a2-9c80-d4496b7b8cf6","Type":"ContainerStarted","Data":"84b64d2f686514e55440c195f60258fe5b920225914d95ab21c6c2c1b24ad3ca"} Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.852216 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wffdm" event={"ID":"7ccfcbc9-c42e-4cf2-bfa3-a3c6e471b9f0","Type":"ContainerStarted","Data":"607228dfa39c3a66feb8352f3d46ad22d939ce55cb213fdd2b259457d9119deb"} Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.852265 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wffdm" event={"ID":"7ccfcbc9-c42e-4cf2-bfa3-a3c6e471b9f0","Type":"ContainerStarted","Data":"3b69f04d29abf4bdfa77dc7849e04c112a722fe2e8de8aa05512fecd9baf981c"} Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.854763 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398485-2272c" event={"ID":"03f55fc0-e04d-4f3a-8869-80cbb53c26ee","Type":"ContainerStarted","Data":"91b22b238a43688558fa254177d0b46f77b76822e841892653cadc58961a9f8f"} Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.854825 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398485-2272c" event={"ID":"03f55fc0-e04d-4f3a-8869-80cbb53c26ee","Type":"ContainerStarted","Data":"f12ca1ac96a2e3d42c6a13427a8a0ae9ddc9b091daa3f3ce9e2813c727b1bd05"} Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.862141 4718 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jznqv" podStartSLOduration=75.86211683 podStartE2EDuration="1m15.86211683s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:36.86093011 +0000 UTC m=+108.100549954" watchObservedRunningTime="2025-11-23 14:47:36.86211683 +0000 UTC m=+108.101736694" Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.863024 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7x8l8" event={"ID":"b6b18826-96e9-4e47-9f95-3f19f298683b","Type":"ContainerStarted","Data":"45df6f2498444c03621a6003fd51836f59078abff98e064959443defb96e3098"} Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.884114 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wffdm" podStartSLOduration=75.884072737 podStartE2EDuration="1m15.884072737s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:36.881465101 +0000 UTC m=+108.121084955" watchObservedRunningTime="2025-11-23 14:47:36.884072737 +0000 UTC m=+108.123692591" Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.890318 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt" event={"ID":"bf1008cf-6837-4089-ae38-2e44add1cfa5","Type":"ContainerStarted","Data":"49c65404c950eabfaf34b7dd1c1f8ea41441d42b7b161abfae4f1559d4d6acab"} Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.890366 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt" event={"ID":"bf1008cf-6837-4089-ae38-2e44add1cfa5","Type":"ContainerStarted","Data":"2cbecd873d72ac4725ceacfeff6020fa58f8449248e3b68623a293cd4db49a45"} Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.891641 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt" Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.902649 4718 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qpsmt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.902703 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt" podUID="bf1008cf-6837-4089-ae38-2e44add1cfa5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.906406 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bzr6j" event={"ID":"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2","Type":"ContainerStarted","Data":"dc0a958178c5f4c0db1ef196339f089d352d567f4af62785107a05cd73955ec4"} Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.910019 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29398485-2272c" podStartSLOduration=75.910001503 podStartE2EDuration="1m15.910001503s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:36.90789815 +0000 UTC m=+108.147517994" watchObservedRunningTime="2025-11-23 14:47:36.910001503 +0000 UTC m=+108.149621337" Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.940643 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt" podStartSLOduration=75.940626999 podStartE2EDuration="1m15.940626999s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:36.939310746 +0000 UTC m=+108.178930600" watchObservedRunningTime="2025-11-23 14:47:36.940626999 +0000 UTC m=+108.180246843" Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.945075 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:36 crc kubenswrapper[4718]: E1123 14:47:36.945388 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:37.445359799 +0000 UTC m=+108.684979663 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.966612 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-bzr6j" podStartSLOduration=75.966598188 podStartE2EDuration="1m15.966598188s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:36.965006677 +0000 UTC m=+108.204626521" watchObservedRunningTime="2025-11-23 14:47:36.966598188 +0000 UTC m=+108.206218032" Nov 23 14:47:36 crc kubenswrapper[4718]: I1123 14:47:36.982165 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-7f7vb" event={"ID":"4281782c-70f4-442f-814b-2e60ea9dae88","Type":"ContainerStarted","Data":"2b40b8192996321a32507ee646736c7d7d1ed13bcb8c24679cbf8e5e060f2fbe"} Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.006282 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-sn5vq" event={"ID":"77ec520a-813e-497b-97bb-c0271026540b","Type":"ContainerStarted","Data":"e129d9a445453207b9a9588f6dd3ed8560bde6607f2e867a67e4d857424dc221"} Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.028936 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-sn5vq" podStartSLOduration=76.028902226 podStartE2EDuration="1m16.028902226s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:37.027052469 +0000 UTC m=+108.266672313" watchObservedRunningTime="2025-11-23 14:47:37.028902226 +0000 UTC m=+108.268522070" Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.056657 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:37 crc kubenswrapper[4718]: E1123 14:47:37.059221 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:37.559200704 +0000 UTC m=+108.798820628 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.071746 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8xb95" event={"ID":"6bfbc7c8-2c86-4bf8-87b6-6eec240a5e2e","Type":"ContainerStarted","Data":"7bae6616684e892f60f05a9ffe2fdf13033160ff88df8bb450d80ac0f032b6d4"} Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.096515 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8xb95" podStartSLOduration=76.096495959 podStartE2EDuration="1m16.096495959s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:37.096310394 +0000 UTC m=+108.335930238" watchObservedRunningTime="2025-11-23 14:47:37.096495959 +0000 UTC m=+108.336115813" Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.118377 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m47rw" event={"ID":"828f7457-8fd5-4c12-9f95-1dd938faacb0","Type":"ContainerStarted","Data":"2aed5a3bd22ff441ee64d3e6ce789c3f5cee67dccf42e298bc0fa21284f84c87"} Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.147649 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mskq" event={"ID":"56de4d78-445d-4e86-80eb-6096912ef506","Type":"ContainerStarted","Data":"70e9498a7e5339fce8db5ac0234fa1d3d54606752fd0dc7665701db849532cb1"} Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.158985 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:37 crc kubenswrapper[4718]: E1123 14:47:37.159927 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:37.659906936 +0000 UTC m=+108.899526780 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.161430 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnph" event={"ID":"4fb8d21d-864e-4fa4-9a79-0ec8c8160fd4","Type":"ContainerStarted","Data":"0781f0e8d38e7022dc2266661cad024c5b6e7123d8acd853e2b62aaa3ed1830c"} Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.187399 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vl2sm" event={"ID":"9a631d45-c084-4027-98c1-2d0faf43d7bc","Type":"ContainerStarted","Data":"564030157d4c279ec181aeb91700af1102e43751755dda37e83d8b3fc675ff98"} Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.187693 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vl2sm" event={"ID":"9a631d45-c084-4027-98c1-2d0faf43d7bc","Type":"ContainerStarted","Data":"d37ae6a68ede402e005f309f9e9fb3bd48d941c273504add2ad0f50ffea78bb7"} Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.209362 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m47rw" podStartSLOduration=76.209347418 podStartE2EDuration="1m16.209347418s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:37.149417759 +0000 UTC m=+108.389037603" watchObservedRunningTime="2025-11-23 14:47:37.209347418 +0000 UTC m=+108.448967262" Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.210368 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mskq" podStartSLOduration=76.210362314 podStartE2EDuration="1m16.210362314s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:37.207941942 +0000 UTC m=+108.447561786" watchObservedRunningTime="2025-11-23 14:47:37.210362314 +0000 UTC m=+108.449982158" Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.214035 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-c8v2t" event={"ID":"bde0d1cd-9913-424d-886d-720a6d29cab8","Type":"ContainerStarted","Data":"eb369779c9df45417bb8d460cf94eb8b445e4190b1101d648ffe9d5b36a072f1"} Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.214077 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-c8v2t" event={"ID":"bde0d1cd-9913-424d-886d-720a6d29cab8","Type":"ContainerStarted","Data":"e5adf09e2eca516e15bcc107a68418859449b823c5fb565813c241292b0632e2"} Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.245360 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vl2sm" podStartSLOduration=76.24533648 podStartE2EDuration="1m16.24533648s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:37.239705928 +0000 UTC m=+108.479325782" watchObservedRunningTime="2025-11-23 14:47:37.24533648 +0000 UTC m=+108.484956324" Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.257554 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gptqg" event={"ID":"844845db-0b6a-4365-aa6e-760f4dd1157d","Type":"ContainerStarted","Data":"7a14b8a8dbbb8f3b86057b3742b7457bc39718a58020995bdfdd58ba2f435471"} Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.257599 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gptqg" event={"ID":"844845db-0b6a-4365-aa6e-760f4dd1157d","Type":"ContainerStarted","Data":"97fcc91ca84acdce4aa998ed77ffb8e9c7d16fad095d785effaea14496cd415c"} Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.260217 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:37 crc kubenswrapper[4718]: E1123 14:47:37.261543 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:37.76152429 +0000 UTC m=+109.001144134 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.274196 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnph" podStartSLOduration=76.27417668 podStartE2EDuration="1m16.27417668s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:37.265191453 +0000 UTC m=+108.504811297" watchObservedRunningTime="2025-11-23 14:47:37.27417668 +0000 UTC m=+108.513796524" Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.276451 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mzr52" event={"ID":"26893a84-2d41-4266-80db-235e4057a14f","Type":"ContainerStarted","Data":"ab3743068f579bd8bfb3637828fa82f745964acc653c2c20056427ff7de084cd"} Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.276493 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mzr52" event={"ID":"26893a84-2d41-4266-80db-235e4057a14f","Type":"ContainerStarted","Data":"d6d8f1d10ea683e0f11073e9bbf43d6ab08bad8c700bced31ef725c6e9e6333c"} Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.292165 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gptqg" podStartSLOduration=76.292148756 podStartE2EDuration="1m16.292148756s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:37.29192868 +0000 UTC m=+108.531548524" watchObservedRunningTime="2025-11-23 14:47:37.292148756 +0000 UTC m=+108.531768600" Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.306118 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gzctt" event={"ID":"fddd9483-5921-4244-af22-39499cfcd168","Type":"ContainerStarted","Data":"f4b9b619387ddb3fba2228453b5d33d586eece3a5f55b6da7ce4076c53848569"} Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.306162 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gzctt" event={"ID":"fddd9483-5921-4244-af22-39499cfcd168","Type":"ContainerStarted","Data":"b22080a5c410a7769301eedc13b5b7a21e674fa01a38080c540e83c17dec001e"} Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.306937 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gzctt" Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.313917 4718 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-gzctt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 
10.217.0.38:5443: connect: connection refused" start-of-body= Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.313974 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gzctt" podUID="fddd9483-5921-4244-af22-39499cfcd168" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.319299 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-c8v2t" podStartSLOduration=6.319286624 podStartE2EDuration="6.319286624s" podCreationTimestamp="2025-11-23 14:47:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:37.318779941 +0000 UTC m=+108.558399785" watchObservedRunningTime="2025-11-23 14:47:37.319286624 +0000 UTC m=+108.558906468" Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.325676 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-crvv6" event={"ID":"24a77fa6-f458-4337-bafc-20b5268bc357","Type":"ContainerStarted","Data":"ddb029e03121fe6eaa574057c95344ea85fc17564f6704bf7c6e5f7d1e3da52d"} Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.325722 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-crvv6" event={"ID":"24a77fa6-f458-4337-bafc-20b5268bc357","Type":"ContainerStarted","Data":"8477708676a4322232ed641b409d267cb37ba48c4dd23c6891147ddaf4f5a31e"} Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.346248 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7f7vb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 14:47:37 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Nov 23 14:47:37 crc kubenswrapper[4718]: [+]process-running ok Nov 23 14:47:37 crc kubenswrapper[4718]: healthz check failed Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.346313 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7f7vb" podUID="4281782c-70f4-442f-814b-2e60ea9dae88" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.353907 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" event={"ID":"2635b714-cae2-41c2-8a7b-87075c04e2b3","Type":"ContainerStarted","Data":"009f8bafe96b198343b8f8409e230b1a0f7864fff3d9357269704cfb43c449cb"} Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.360329 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2vccg" event={"ID":"cb2788e1-4aae-4991-be1e-78f77e0e0811","Type":"ContainerStarted","Data":"799379df7a10a23dc9eec574ee6be9ab333bad58e60d28e002b2470a2e24d707"} Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.361096 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:37 crc kubenswrapper[4718]: E1123 14:47:37.361521 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:37.861505594 +0000 UTC m=+109.101125438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.361636 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:37 crc kubenswrapper[4718]: E1123 14:47:37.363170 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:37.863162476 +0000 UTC m=+109.102782320 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.383061 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gzctt" podStartSLOduration=76.383043329 podStartE2EDuration="1m16.383043329s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:37.382158226 +0000 UTC m=+108.621778070" watchObservedRunningTime="2025-11-23 14:47:37.383043329 +0000 UTC m=+108.622663173" Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.383865 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mzr52" podStartSLOduration=76.38385924 podStartE2EDuration="1m16.38385924s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:37.356887756 +0000 UTC m=+108.596507600" watchObservedRunningTime="2025-11-23 14:47:37.38385924 +0000 UTC m=+108.623479084" Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.387273 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czc9t" event={"ID":"69074044-d0c1-4f42-bd99-bce808257377","Type":"ContainerStarted","Data":"b5416f1b3b5c898b93bf58da022e1735f6c6fc2d53c0b6c20bbc6d05549f4311"}
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.387319 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czc9t" event={"ID":"69074044-d0c1-4f42-bd99-bce808257377","Type":"ContainerStarted","Data":"fd3f3a018f6409a850b7fdff0373ced996cf3028fdf5e14198cdd391f0204412"}
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.388057 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czc9t"
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.393199 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-mcr5m" event={"ID":"002000ac-5a96-4bb1-985a-7863f4c9e05a","Type":"ContainerStarted","Data":"56a9a662d94e4ae8010da15bf37a6253e192fa0a3614db69a42b27cdb6486f6e"}
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.405624 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8cmz" event={"ID":"51a36f59-fabf-4d64-bd21-c294a70460cd","Type":"ContainerStarted","Data":"27bb8b722897f1da4a1107bf262b5486a64b39250938d81e21974a5afb684553"}
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.411322 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5qvjc" event={"ID":"af45cb6f-fed7-415b-bf75-e742a875a1c0","Type":"ContainerStarted","Data":"34fad39af7078f5871e7092fe0115c6bbcb04ef902c3ea94b3a072832cf0404e"}
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.411471 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-crvv6" podStartSLOduration=76.411456309 podStartE2EDuration="1m16.411456309s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:37.410133155 +0000 UTC m=+108.649752999" watchObservedRunningTime="2025-11-23 14:47:37.411456309 +0000 UTC m=+108.651076153"
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.420945 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-tdwkr" event={"ID":"5a619d49-cea1-437d-8b81-6947b3085376","Type":"ContainerStarted","Data":"3ecb8c24e72a883ef77d5f362051e2d21d7c31d49695d99b2fca1515e805c91c"}
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.420995 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-tdwkr" event={"ID":"5a619d49-cea1-437d-8b81-6947b3085376","Type":"ContainerStarted","Data":"d6f707810a2d71f90723d32eda48e820e7846853b1e06cd4c8e7c0b54ede8f4c"}
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.428377 4718 generic.go:334] "Generic (PLEG): container finished" podID="cd397098-2850-4c0e-aa5b-5147532da7a5" containerID="1dab4f475f4531dfbbe30bf6a8bf1161758bccf5adb8c4654acb5c505599caa5" exitCode=0
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.428462 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" event={"ID":"cd397098-2850-4c0e-aa5b-5147532da7a5","Type":"ContainerDied","Data":"1dab4f475f4531dfbbe30bf6a8bf1161758bccf5adb8c4654acb5c505599caa5"}
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.440758 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" podStartSLOduration=76.44073009 podStartE2EDuration="1m16.44073009s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:37.438623378 +0000 UTC m=+108.678243222" watchObservedRunningTime="2025-11-23 14:47:37.44073009 +0000 UTC m=+108.680349934"
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.443888 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-42h2c" event={"ID":"00ada162-924a-42f2-85b2-62e5df70027d","Type":"ContainerStarted","Data":"db9ffb998e081ef2da7ba40de1b97b3f6d87442bbf90417972f5dc2240b75b9e"}
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.443941 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-42h2c" event={"ID":"00ada162-924a-42f2-85b2-62e5df70027d","Type":"ContainerStarted","Data":"9611da283ac4faaca15983210fbd912939ae4052e259f2c9a4bdf3036c2741ab"}
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.444803 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-42h2c"
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.449199 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb" event={"ID":"205d682b-7592-4b53-bcf6-0300c1084046","Type":"ContainerStarted","Data":"423996cfbebbc46637a7d2dd67b1496c0db44c1d2f53c084d44bb3b29288ae37"}
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.450108 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb"
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.463911 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.467423 4718 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-zmppb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.467500 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb" podUID="205d682b-7592-4b53-bcf6-0300c1084046" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Nov 23 14:47:37 crc kubenswrapper[4718]: E1123 14:47:37.467467 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:37.967421827 +0000 UTC m=+109.207041671 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.468136 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2vccg" podStartSLOduration=76.468111025 podStartE2EDuration="1m16.468111025s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:37.467800157 +0000 UTC m=+108.707420001" watchObservedRunningTime="2025-11-23 14:47:37.468111025 +0000 UTC m=+108.707730869"
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.471503 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-45fwt" event={"ID":"bb094bc3-3ea8-4a2b-9f41-61621ba47667","Type":"ContainerStarted","Data":"368bedec27afe6f85f92ca8aaacc6129ac7bb7df0fdb8457422f922b71c9d93c"}
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.474781 4718 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-42h2c container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" start-of-body=
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.474886 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-42h2c" podUID="00ada162-924a-42f2-85b2-62e5df70027d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused"
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.481556 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8m5b" event={"ID":"ff9a491e-b915-4980-92f9-71844bc90a65","Type":"ContainerStarted","Data":"42ce600f911dd2c4e3915c44858ffd57f4d03b67b675858610a9be320b31fea6"}
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.484754 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" event={"ID":"fa4bc064-a334-47bd-820e-00ced1c89025","Type":"ContainerStarted","Data":"a8b1cfc61b4cd42c65dcf17416e7823d612ec06ee754022dd594a8955b1fb86c"}
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.485637 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt"
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.505215 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb" podStartSLOduration=76.505193404 podStartE2EDuration="1m16.505193404s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:37.489979058 +0000 UTC m=+108.729598892" watchObservedRunningTime="2025-11-23 14:47:37.505193404 +0000 UTC m=+108.744813248"
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.511833 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-q52hx" event={"ID":"3751d59c-0c3d-4000-b034-7b33039c7930","Type":"ContainerStarted","Data":"09360cdaecca5fd0e37d5696602ad9ef59ea205c0dd6d77a5f047d2aa86d4225"}
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.513705 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czc9t" podStartSLOduration=76.51333450999999 podStartE2EDuration="1m16.51333451s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:37.512201071 +0000 UTC m=+108.751820915" watchObservedRunningTime="2025-11-23 14:47:37.51333451 +0000 UTC m=+108.752954354"
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.538517 4718 generic.go:334] "Generic (PLEG): container finished" podID="7600967f-215f-411c-b19a-8433e7a266ee" containerID="465caaa0d17481d3e2ef93a73e978286d0917505f5f3b2dcacc4f33e84a0b3fd" exitCode=0
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.540453 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mlwtl" event={"ID":"7600967f-215f-411c-b19a-8433e7a266ee","Type":"ContainerDied","Data":"465caaa0d17481d3e2ef93a73e978286d0917505f5f3b2dcacc4f33e84a0b3fd"}
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.547423 4718 patch_prober.go:28] interesting pod/downloads-7954f5f757-7hg9r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.547485 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7hg9r" podUID="87a6d300-fa67-4762-9025-232fcb2ea96d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.566653 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh"
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.570716 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-42h2c" podStartSLOduration=76.570704464 podStartE2EDuration="1m16.570704464s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:37.569077043 +0000 UTC m=+108.808696887" watchObservedRunningTime="2025-11-23 14:47:37.570704464 +0000 UTC m=+108.810324298"
Nov 23 14:47:37 crc kubenswrapper[4718]: E1123 14:47:37.571682 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:38.071670889 +0000 UTC m=+109.311290733 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.591714 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r"
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.607410 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-tdwkr" podStartSLOduration=76.607390793 podStartE2EDuration="1m16.607390793s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:37.606904871 +0000 UTC m=+108.846524715" watchObservedRunningTime="2025-11-23 14:47:37.607390793 +0000 UTC m=+108.847010627"
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.672001 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 14:47:37 crc kubenswrapper[4718]: E1123 14:47:37.673272 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:38.173255282 +0000 UTC m=+109.412875126 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.680611 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-nfkmp"
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.703090 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-45fwt" podStartSLOduration=76.703071778 podStartE2EDuration="1m16.703071778s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:37.702130294 +0000 UTC m=+108.941750138" watchObservedRunningTime="2025-11-23 14:47:37.703071778 +0000 UTC m=+108.942691622"
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.777271 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh"
Nov 23 14:47:37 crc kubenswrapper[4718]: E1123 14:47:37.777819 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:38.277808371 +0000 UTC m=+109.517428215 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.878391 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 14:47:37 crc kubenswrapper[4718]: E1123 14:47:37.878587 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:38.378558034 +0000 UTC m=+109.618177878 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.878803 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh"
Nov 23 14:47:37 crc kubenswrapper[4718]: E1123 14:47:37.879152 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:38.379125218 +0000 UTC m=+109.618745062 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.977292 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" podStartSLOduration=76.977260325 podStartE2EDuration="1m16.977260325s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:37.969338954 +0000 UTC m=+109.208958798" watchObservedRunningTime="2025-11-23 14:47:37.977260325 +0000 UTC m=+109.216880169"
Nov 23 14:47:37 crc kubenswrapper[4718]: I1123 14:47:37.979822 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 14:47:37 crc kubenswrapper[4718]: E1123 14:47:37.980141 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:38.480125598 +0000 UTC m=+109.719745442 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.081249 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh"
Nov 23 14:47:38 crc kubenswrapper[4718]: E1123 14:47:38.081687 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:38.58167133 +0000 UTC m=+109.821291175 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.182183 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 14:47:38 crc kubenswrapper[4718]: E1123 14:47:38.182319 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:38.68229929 +0000 UTC m=+109.921919134 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.182769 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh"
Nov 23 14:47:38 crc kubenswrapper[4718]: E1123 14:47:38.183025 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:38.683017248 +0000 UTC m=+109.922637092 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.283545 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 14:47:38 crc kubenswrapper[4718]: E1123 14:47:38.283776 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:38.78373843 +0000 UTC m=+110.023358274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.283956 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh"
Nov 23 14:47:38 crc kubenswrapper[4718]: E1123 14:47:38.284297 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:38.784283835 +0000 UTC m=+110.023903679 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.334466 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7f7vb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 23 14:47:38 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld
Nov 23 14:47:38 crc kubenswrapper[4718]: [+]process-running ok
Nov 23 14:47:38 crc kubenswrapper[4718]: healthz check failed
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.334537 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7f7vb" podUID="4281782c-70f4-442f-814b-2e60ea9dae88" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.385203 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 14:47:38 crc kubenswrapper[4718]: E1123 14:47:38.385423 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:38.885392036 +0000 UTC m=+110.125011880 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.385539 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh"
Nov 23 14:47:38 crc kubenswrapper[4718]: E1123 14:47:38.385880 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:38.885870188 +0000 UTC m=+110.125490032 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.486687 4718 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-hnvnt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.28:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.486756 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" podUID="fa4bc064-a334-47bd-820e-00ced1c89025" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.28:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.487204 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 14:47:38 crc kubenswrapper[4718]: E1123 14:47:38.487367 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:38.98734826 +0000 UTC m=+110.226968104 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.487568 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh"
Nov 23 14:47:38 crc kubenswrapper[4718]: E1123 14:47:38.487909 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:38.987902203 +0000 UTC m=+110.227522047 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.511757 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-gwk4k"
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.511820 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-gwk4k"
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.515208 4718 patch_prober.go:28] interesting pod/apiserver-76f77b778f-gwk4k container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.515285 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" podUID="2635b714-cae2-41c2-8a7b-87075c04e2b3" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused"
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.545576 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-crvv6" event={"ID":"24a77fa6-f458-4337-bafc-20b5268bc357","Type":"ContainerStarted","Data":"f32810fc71f5509cb6a1ac03d168cf2361a32a1a23c3c86cf841265b29aa9332"}
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.548315 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" event={"ID":"2635b714-cae2-41c2-8a7b-87075c04e2b3","Type":"ContainerStarted","Data":"e1e8d2e2c163ff0b400ddb519b9adead6e0f23721b28323f8c6f90f0a997f603"}
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.549983 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-q52hx" event={"ID":"3751d59c-0c3d-4000-b034-7b33039c7930","Type":"ContainerStarted","Data":"263955a58b1939e9642b4cb502cf4f18c9bcc5cb4f9950f28c636b4a21529726"}
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.550016 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-q52hx" event={"ID":"3751d59c-0c3d-4000-b034-7b33039c7930","Type":"ContainerStarted","Data":"b1572767a648e15c5a476445b5001f92824e727584e734a3b79a088081ef3e98"}
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.551723 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jh7tn" event={"ID":"fbbd315e-478d-48fe-8df6-a58a288adba0","Type":"ContainerStarted","Data":"61b35da0ceb895810c7f23efde0fa49f33bb77d6305c9590ab0abff6cbc0be97"}
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.551822 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-jh7tn"
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.553559 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mlwtl" event={"ID":"7600967f-215f-411c-b19a-8433e7a266ee","Type":"ContainerStarted","Data":"810523d4405c8fbd0a3f0ac5a27d6015b28b714b3d917af8dced96f140792586"}
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.553665 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mlwtl"
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.555193 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5qvjc" event={"ID":"af45cb6f-fed7-415b-bf75-e742a875a1c0","Type":"ContainerStarted","Data":"aab52ead52cb702a603292dab3c1f06c8e7eefc6d9673040e29755f1217823c6"}
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.556987 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-lglwp" event={"ID":"c94ba48a-8de9-40a2-9c80-d4496b7b8cf6","Type":"ContainerStarted","Data":"6a68269dc84c5ee935aa0add9d9576060f1f41ccff6ffb3d5fed24ba2d0b35e1"}
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.558332 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7x8l8" event={"ID":"b6b18826-96e9-4e47-9f95-3f19f298683b","Type":"ContainerStarted","Data":"79f0726694c305a18035ea2469121b38b8093f614ea13fa36690c7fb5aeef263"}
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.558549 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7x8l8"
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.559629 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czc9t" event={"ID":"69074044-d0c1-4f42-bd99-bce808257377","Type":"ContainerStarted","Data":"f904d1699b0442421b7c60cdbd6d8a9362c48e3edee41054964f314e67d45242"}
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.561028 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m47rw" event={"ID":"828f7457-8fd5-4c12-9f95-1dd938faacb0","Type":"ContainerStarted","Data":"a251bdedd3c0cf6350536dffa8459bfaee0e2104a64d615552c9e43ab7dad4e4"}
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.562450 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" event={"ID":"cd397098-2850-4c0e-aa5b-5147532da7a5","Type":"ContainerStarted","Data":"15a735ac774f5c62df21a9f9a3e10276bf85791abb1192228c4583410ab9c65a"}
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.563924 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gptqg" event={"ID":"844845db-0b6a-4365-aa6e-760f4dd1157d","Type":"ContainerStarted","Data":"f1e5822263e0513d7ab54f23bea4d85f5d2e66f09cede2b8fa784770462c7a7c"}
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.564798 4718 patch_prober.go:28] interesting pod/downloads-7954f5f757-7hg9r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.564862 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7hg9r" podUID="87a6d300-fa67-4762-9025-232fcb2ea96d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.565924 4718 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qpsmt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.565982 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt" podUID="bf1008cf-6837-4089-ae38-2e44add1cfa5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused"
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.571034 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-42h2c"
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.575896 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb"
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.592210 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 14:47:38 crc kubenswrapper[4718]: E1123 14:47:38.592842 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:39.092825732 +0000 UTC m=+110.332445566 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.595292 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-q52hx" podStartSLOduration=77.595265024 podStartE2EDuration="1m17.595265024s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:38.593591262 +0000 UTC m=+109.833211106" watchObservedRunningTime="2025-11-23 14:47:38.595265024 +0000 UTC m=+109.834884868"
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.630722 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7x8l8"
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.671697 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7x8l8" podStartSLOduration=77.671680859 podStartE2EDuration="1m17.671680859s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:38.653502029 +0000 UTC m=+109.893121873" watchObservedRunningTime="2025-11-23 14:47:38.671680859 +0000 UTC m=+109.911300703"
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.694763 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh"
Nov 23 14:47:38 crc kubenswrapper[4718]: E1123 14:47:38.696638 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:39.196623552 +0000 UTC m=+110.436243396 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.798261 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 14:47:38 crc kubenswrapper[4718]: E1123 14:47:38.798452 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:39.298417561 +0000 UTC m=+110.538037405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.798578 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh"
Nov 23 14:47:38 crc kubenswrapper[4718]: E1123 14:47:38.798861 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:39.298848542 +0000 UTC m=+110.538468386 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.852349 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-jh7tn" podStartSLOduration=7.852335627 podStartE2EDuration="7.852335627s" podCreationTimestamp="2025-11-23 14:47:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:38.718033885 +0000 UTC m=+109.957653729" watchObservedRunningTime="2025-11-23 14:47:38.852335627 +0000 UTC m=+110.091955471"
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.900030 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 14:47:38 crc kubenswrapper[4718]: E1123 14:47:38.900225 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:39.40018203 +0000 UTC m=+110.639801874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:38 crc kubenswrapper[4718]: I1123 14:47:38.900269 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh"
Nov 23 14:47:38 crc kubenswrapper[4718]: E1123 14:47:38.900637 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:39.400629901 +0000 UTC m=+110.640249745 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.001182 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 14:47:39 crc kubenswrapper[4718]: E1123 14:47:39.001289 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:39.501272211 +0000 UTC m=+110.740892065 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.001656 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh"
Nov 23 14:47:39 crc kubenswrapper[4718]: E1123 14:47:39.002356 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:39.502323618 +0000 UTC m=+110.741943632 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.073047 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mlwtl" podStartSLOduration=78.072997578 podStartE2EDuration="1m18.072997578s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:39.068784231 +0000 UTC m=+110.308404075" watchObservedRunningTime="2025-11-23 14:47:39.072997578 +0000 UTC m=+110.312617422"
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.103342 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 14:47:39 crc kubenswrapper[4718]: E1123 14:47:39.103459 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:39.603431599 +0000 UTC m=+110.843051443 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.103721 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh"
Nov 23 14:47:39 crc kubenswrapper[4718]: E1123 14:47:39.104061 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:39.604048355 +0000 UTC m=+110.843668199 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.150972 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-lglwp" podStartSLOduration=78.150954453 podStartE2EDuration="1m18.150954453s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:39.149383294 +0000 UTC m=+110.389003138" watchObservedRunningTime="2025-11-23 14:47:39.150954453 +0000 UTC m=+110.390574297"
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.209263 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 14:47:39 crc kubenswrapper[4718]: E1123 14:47:39.209721 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:39.709704162 +0000 UTC m=+110.949324006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.250141 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" podStartSLOduration=78.250124086 podStartE2EDuration="1m18.250124086s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:39.208509862 +0000 UTC m=+110.448129706" watchObservedRunningTime="2025-11-23 14:47:39.250124086 +0000 UTC m=+110.489743930"
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.284044 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4"
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.284091 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4"
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.302786 4718 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-pfnl4 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.14:8443/livez\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.302836 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" podUID="cd397098-2850-4c0e-aa5b-5147532da7a5" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.14:8443/livez\": dial tcp 10.217.0.14:8443: connect: connection refused"
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.314140 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh"
Nov 23 14:47:39 crc kubenswrapper[4718]: E1123 14:47:39.314495 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:39.814482117 +0000 UTC m=+111.054101961 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.341708 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7f7vb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 23 14:47:39 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld
Nov 23 14:47:39 crc kubenswrapper[4718]: [+]process-running ok
Nov 23 14:47:39 crc kubenswrapper[4718]: healthz check failed
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.341760 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7f7vb" podUID="4281782c-70f4-442f-814b-2e60ea9dae88" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.415042 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 14:47:39 crc kubenswrapper[4718]: E1123 14:47:39.415335 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:39.915319662 +0000 UTC m=+111.154939506 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.486199 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gzctt"
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.516199 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh"
Nov 23 14:47:39 crc kubenswrapper[4718]: E1123 14:47:39.516563 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:40.016546046 +0000 UTC m=+111.256165890 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.564423 4718 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-hnvnt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.28:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.564564 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" podUID="fa4bc064-a334-47bd-820e-00ced1c89025" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.28:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.589309 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5qvjc" event={"ID":"af45cb6f-fed7-415b-bf75-e742a875a1c0","Type":"ContainerStarted","Data":"45fd9529b174ae7f4f031fc54467c364bb6fe470c500c9bb6309aab3721c1b54"}
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.590396 4718 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qpsmt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.590470 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt" podUID="bf1008cf-6837-4089-ae38-2e44add1cfa5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused"
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.619236 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 14:47:39 crc kubenswrapper[4718]: E1123 14:47:39.619857 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:40.119832163 +0000 UTC m=+111.359451997 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.721810 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh"
Nov 23 14:47:39 crc kubenswrapper[4718]: E1123 14:47:39.724252 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:40.224229818 +0000 UTC m=+111.463849662 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.823640 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 14:47:39 crc kubenswrapper[4718]: E1123 14:47:39.823828 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:40.32379166 +0000 UTC m=+111.563411494 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.824397 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:39 crc kubenswrapper[4718]: E1123 14:47:39.824738 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:40.324730445 +0000 UTC m=+111.564350289 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.928524 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:47:39 crc kubenswrapper[4718]: E1123 14:47:39.928580 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:40.428565975 +0000 UTC m=+111.668185819 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.928527 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:39 crc kubenswrapper[4718]: I1123 14:47:39.928913 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:39 crc kubenswrapper[4718]: E1123 14:47:39.929145 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:40.42913862 +0000 UTC m=+111.668758464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.029361 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:40 crc kubenswrapper[4718]: E1123 14:47:40.029715 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:40.529701748 +0000 UTC m=+111.769321592 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.130647 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:40 crc kubenswrapper[4718]: E1123 14:47:40.130962 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:40.630950964 +0000 UTC m=+111.870570808 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.231876 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:40 crc kubenswrapper[4718]: E1123 14:47:40.232069 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:40.732036965 +0000 UTC m=+111.971656809 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.232484 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:40 crc kubenswrapper[4718]: E1123 14:47:40.232830 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:40.732822254 +0000 UTC m=+111.972442098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.296702 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.297530 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.301468 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.301647 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.315793 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.333503 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:40 crc kubenswrapper[4718]: E1123 14:47:40.333857 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:40.833832064 +0000 UTC m=+112.073451908 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.334003 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:40 crc kubenswrapper[4718]: E1123 14:47:40.334365 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:40.834353867 +0000 UTC m=+112.073973711 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.337481 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7f7vb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 14:47:40 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Nov 23 14:47:40 crc kubenswrapper[4718]: [+]process-running ok Nov 23 14:47:40 crc kubenswrapper[4718]: healthz check failed Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.337532 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7f7vb" podUID="4281782c-70f4-442f-814b-2e60ea9dae88" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.434900 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:40 crc kubenswrapper[4718]: E1123 14:47:40.435137 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:40.93510897 +0000 UTC m=+112.174728814 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.435347 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa91d87c-dcfa-4467-84bb-cdbb0599176b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"aa91d87c-dcfa-4467-84bb-cdbb0599176b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.435472 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c82c8ca1-7a30-47c8-a679-abe265aca15b-metrics-certs\") pod \"network-metrics-daemon-7qh4j\" (UID: \"c82c8ca1-7a30-47c8-a679-abe265aca15b\") " pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.435604 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa91d87c-dcfa-4467-84bb-cdbb0599176b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"aa91d87c-dcfa-4467-84bb-cdbb0599176b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.435700 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:40 crc kubenswrapper[4718]: E1123 14:47:40.435982 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:40.935971091 +0000 UTC m=+112.175591005 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.446213 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c82c8ca1-7a30-47c8-a679-abe265aca15b-metrics-certs\") pod \"network-metrics-daemon-7qh4j\" (UID: \"c82c8ca1-7a30-47c8-a679-abe265aca15b\") " pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.484088 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tblh6"] Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.484982 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tblh6" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.489585 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.529643 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tblh6"] Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.551076 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:40 crc kubenswrapper[4718]: E1123 14:47:40.551275 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:41.051240982 +0000 UTC m=+112.290860826 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.551383 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa91d87c-dcfa-4467-84bb-cdbb0599176b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"aa91d87c-dcfa-4467-84bb-cdbb0599176b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.551451 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.551527 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa91d87c-dcfa-4467-84bb-cdbb0599176b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"aa91d87c-dcfa-4467-84bb-cdbb0599176b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.551656 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa91d87c-dcfa-4467-84bb-cdbb0599176b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"aa91d87c-dcfa-4467-84bb-cdbb0599176b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 14:47:40 crc kubenswrapper[4718]: E1123 14:47:40.552070 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:41.052049932 +0000 UTC m=+112.291669776 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.576754 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7qh4j" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.614115 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa91d87c-dcfa-4467-84bb-cdbb0599176b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"aa91d87c-dcfa-4467-84bb-cdbb0599176b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.639246 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5qvjc" event={"ID":"af45cb6f-fed7-415b-bf75-e742a875a1c0","Type":"ContainerStarted","Data":"f5410fe64a15da32a966b0a935ca06cdff567dd1a706668520ad24bff7ce7d2c"} Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.654916 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.655150 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkvgs\" (UniqueName: \"kubernetes.io/projected/3f4a73b5-96e0-451e-994f-d7351ee3fef3-kube-api-access-gkvgs\") pod \"community-operators-tblh6\" (UID: \"3f4a73b5-96e0-451e-994f-d7351ee3fef3\") " pod="openshift-marketplace/community-operators-tblh6" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.655201 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4a73b5-96e0-451e-994f-d7351ee3fef3-utilities\") pod \"community-operators-tblh6\" (UID: \"3f4a73b5-96e0-451e-994f-d7351ee3fef3\") " pod="openshift-marketplace/community-operators-tblh6" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.655225 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4a73b5-96e0-451e-994f-d7351ee3fef3-catalog-content\") pod \"community-operators-tblh6\" (UID: \"3f4a73b5-96e0-451e-994f-d7351ee3fef3\") " pod="openshift-marketplace/community-operators-tblh6" Nov 23 14:47:40 crc kubenswrapper[4718]: E1123 14:47:40.655346 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:41.155330959 +0000 UTC m=+112.394950793 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.658300 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k8wqs"] Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.659184 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k8wqs" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.663983 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.679816 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k8wqs"] Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.759186 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkvgs\" (UniqueName: \"kubernetes.io/projected/3f4a73b5-96e0-451e-994f-d7351ee3fef3-kube-api-access-gkvgs\") pod \"community-operators-tblh6\" (UID: \"3f4a73b5-96e0-451e-994f-d7351ee3fef3\") " pod="openshift-marketplace/community-operators-tblh6" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.759591 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80a1d25a-995b-4056-a9e4-cafa5bd6a143-catalog-content\") pod \"certified-operators-k8wqs\" (UID: \"80a1d25a-995b-4056-a9e4-cafa5bd6a143\") " pod="openshift-marketplace/certified-operators-k8wqs" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.759663 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4a73b5-96e0-451e-994f-d7351ee3fef3-utilities\") pod \"community-operators-tblh6\" (UID: \"3f4a73b5-96e0-451e-994f-d7351ee3fef3\") " pod="openshift-marketplace/community-operators-tblh6" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.759718 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4a73b5-96e0-451e-994f-d7351ee3fef3-catalog-content\") pod \"community-operators-tblh6\" (UID: \"3f4a73b5-96e0-451e-994f-d7351ee3fef3\") " pod="openshift-marketplace/community-operators-tblh6" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.759834 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80a1d25a-995b-4056-a9e4-cafa5bd6a143-utilities\") pod \"certified-operators-k8wqs\" (UID: \"80a1d25a-995b-4056-a9e4-cafa5bd6a143\") " pod="openshift-marketplace/certified-operators-k8wqs" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.759923 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: 
\"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.759977 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2pcj\" (UniqueName: \"kubernetes.io/projected/80a1d25a-995b-4056-a9e4-cafa5bd6a143-kube-api-access-b2pcj\") pod \"certified-operators-k8wqs\" (UID: \"80a1d25a-995b-4056-a9e4-cafa5bd6a143\") " pod="openshift-marketplace/certified-operators-k8wqs" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.763979 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4a73b5-96e0-451e-994f-d7351ee3fef3-utilities\") pod \"community-operators-tblh6\" (UID: \"3f4a73b5-96e0-451e-994f-d7351ee3fef3\") " pod="openshift-marketplace/community-operators-tblh6" Nov 23 14:47:40 crc kubenswrapper[4718]: E1123 14:47:40.764692 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:41.26467758 +0000 UTC m=+112.504297424 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.766515 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4a73b5-96e0-451e-994f-d7351ee3fef3-catalog-content\") pod \"community-operators-tblh6\" (UID: \"3f4a73b5-96e0-451e-994f-d7351ee3fef3\") " pod="openshift-marketplace/community-operators-tblh6" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.812664 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkvgs\" (UniqueName: \"kubernetes.io/projected/3f4a73b5-96e0-451e-994f-d7351ee3fef3-kube-api-access-gkvgs\") pod \"community-operators-tblh6\" (UID: \"3f4a73b5-96e0-451e-994f-d7351ee3fef3\") " pod="openshift-marketplace/community-operators-tblh6" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.873144 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.873396 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2pcj\" (UniqueName: \"kubernetes.io/projected/80a1d25a-995b-4056-a9e4-cafa5bd6a143-kube-api-access-b2pcj\") pod \"certified-operators-k8wqs\" (UID: \"80a1d25a-995b-4056-a9e4-cafa5bd6a143\") " pod="openshift-marketplace/certified-operators-k8wqs" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.873457 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/80a1d25a-995b-4056-a9e4-cafa5bd6a143-catalog-content\") pod \"certified-operators-k8wqs\" (UID: \"80a1d25a-995b-4056-a9e4-cafa5bd6a143\") " pod="openshift-marketplace/certified-operators-k8wqs" Nov 23 14:47:40 crc kubenswrapper[4718]: E1123 14:47:40.873535 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:41.373516438 +0000 UTC m=+112.613136282 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.873576 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80a1d25a-995b-4056-a9e4-cafa5bd6a143-utilities\") pod \"certified-operators-k8wqs\" (UID: \"80a1d25a-995b-4056-a9e4-cafa5bd6a143\") " pod="openshift-marketplace/certified-operators-k8wqs" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.873855 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80a1d25a-995b-4056-a9e4-cafa5bd6a143-catalog-content\") pod \"certified-operators-k8wqs\" (UID: \"80a1d25a-995b-4056-a9e4-cafa5bd6a143\") " pod="openshift-marketplace/certified-operators-k8wqs" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.875039 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hqkrd"] Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.875898 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hqkrd" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.883781 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80a1d25a-995b-4056-a9e4-cafa5bd6a143-utilities\") pod \"certified-operators-k8wqs\" (UID: \"80a1d25a-995b-4056-a9e4-cafa5bd6a143\") " pod="openshift-marketplace/certified-operators-k8wqs" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.891245 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hqkrd"] Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.910720 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.912844 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2pcj\" (UniqueName: \"kubernetes.io/projected/80a1d25a-995b-4056-a9e4-cafa5bd6a143-kube-api-access-b2pcj\") pod \"certified-operators-k8wqs\" (UID: \"80a1d25a-995b-4056-a9e4-cafa5bd6a143\") " pod="openshift-marketplace/certified-operators-k8wqs" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.967323 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7qh4j"] Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.975089 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fc35da2-aa04-4038-aaf9-3483211b1daa-utilities\") pod \"community-operators-hqkrd\" (UID: \"2fc35da2-aa04-4038-aaf9-3483211b1daa\") " pod="openshift-marketplace/community-operators-hqkrd" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.975150 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fc35da2-aa04-4038-aaf9-3483211b1daa-catalog-content\") pod \"community-operators-hqkrd\" (UID: \"2fc35da2-aa04-4038-aaf9-3483211b1daa\") " pod="openshift-marketplace/community-operators-hqkrd" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.975179 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rfwj\" (UniqueName: \"kubernetes.io/projected/2fc35da2-aa04-4038-aaf9-3483211b1daa-kube-api-access-7rfwj\") pod \"community-operators-hqkrd\" (UID: \"2fc35da2-aa04-4038-aaf9-3483211b1daa\") " pod="openshift-marketplace/community-operators-hqkrd" Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.975307 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:40 crc kubenswrapper[4718]: E1123 14:47:40.975655 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:41.475639775 +0000 UTC m=+112.715259619 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:40 crc kubenswrapper[4718]: I1123 14:47:40.977206 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k8wqs" Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.061733 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mr8cp"] Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.062626 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mr8cp" Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.076219 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.076567 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fc35da2-aa04-4038-aaf9-3483211b1daa-utilities\") pod \"community-operators-hqkrd\" (UID: \"2fc35da2-aa04-4038-aaf9-3483211b1daa\") " pod="openshift-marketplace/community-operators-hqkrd" Nov 23 14:47:41 crc kubenswrapper[4718]: E1123 14:47:41.076613 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:41.576590573 +0000 UTC m=+112.816210417 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.076652 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rfwj\" (UniqueName: \"kubernetes.io/projected/2fc35da2-aa04-4038-aaf9-3483211b1daa-kube-api-access-7rfwj\") pod \"community-operators-hqkrd\" (UID: \"2fc35da2-aa04-4038-aaf9-3483211b1daa\") " pod="openshift-marketplace/community-operators-hqkrd" Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.076692 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fc35da2-aa04-4038-aaf9-3483211b1daa-catalog-content\") pod \"community-operators-hqkrd\" (UID: \"2fc35da2-aa04-4038-aaf9-3483211b1daa\") " pod="openshift-marketplace/community-operators-hqkrd" Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.077295 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fc35da2-aa04-4038-aaf9-3483211b1daa-utilities\") pod \"community-operators-hqkrd\" (UID: \"2fc35da2-aa04-4038-aaf9-3483211b1daa\") " pod="openshift-marketplace/community-operators-hqkrd" Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.077351 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fc35da2-aa04-4038-aaf9-3483211b1daa-catalog-content\") pod \"community-operators-hqkrd\" (UID: 
\"2fc35da2-aa04-4038-aaf9-3483211b1daa\") " pod="openshift-marketplace/community-operators-hqkrd" Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.083222 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mr8cp"] Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.098804 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tblh6" Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.103293 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rfwj\" (UniqueName: \"kubernetes.io/projected/2fc35da2-aa04-4038-aaf9-3483211b1daa-kube-api-access-7rfwj\") pod \"community-operators-hqkrd\" (UID: \"2fc35da2-aa04-4038-aaf9-3483211b1daa\") " pod="openshift-marketplace/community-operators-hqkrd" Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.177892 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v7np\" (UniqueName: \"kubernetes.io/projected/18dd40c2-0fcf-4e42-b711-fe7cac112584-kube-api-access-6v7np\") pod \"certified-operators-mr8cp\" (UID: \"18dd40c2-0fcf-4e42-b711-fe7cac112584\") " pod="openshift-marketplace/certified-operators-mr8cp" Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.177962 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.177990 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18dd40c2-0fcf-4e42-b711-fe7cac112584-catalog-content\") pod \"certified-operators-mr8cp\" (UID: \"18dd40c2-0fcf-4e42-b711-fe7cac112584\") " pod="openshift-marketplace/certified-operators-mr8cp" Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.178029 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18dd40c2-0fcf-4e42-b711-fe7cac112584-utilities\") pod \"certified-operators-mr8cp\" (UID: \"18dd40c2-0fcf-4e42-b711-fe7cac112584\") " pod="openshift-marketplace/certified-operators-mr8cp" Nov 23 14:47:41 crc kubenswrapper[4718]: E1123 14:47:41.178339 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:41.678326381 +0000 UTC m=+112.917946225 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.204939 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hqkrd" Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.254687 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.279029 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.279229 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v7np\" (UniqueName: \"kubernetes.io/projected/18dd40c2-0fcf-4e42-b711-fe7cac112584-kube-api-access-6v7np\") pod \"certified-operators-mr8cp\" (UID: \"18dd40c2-0fcf-4e42-b711-fe7cac112584\") " pod="openshift-marketplace/certified-operators-mr8cp" Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.279295 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18dd40c2-0fcf-4e42-b711-fe7cac112584-catalog-content\") pod \"certified-operators-mr8cp\" (UID: \"18dd40c2-0fcf-4e42-b711-fe7cac112584\") " pod="openshift-marketplace/certified-operators-mr8cp" Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.279327 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18dd40c2-0fcf-4e42-b711-fe7cac112584-utilities\") pod \"certified-operators-mr8cp\" (UID: \"18dd40c2-0fcf-4e42-b711-fe7cac112584\") " pod="openshift-marketplace/certified-operators-mr8cp" Nov 23 14:47:41 crc kubenswrapper[4718]: E1123 14:47:41.279653 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:41.779638728 +0000 UTC m=+113.019258572 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.279659 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18dd40c2-0fcf-4e42-b711-fe7cac112584-utilities\") pod \"certified-operators-mr8cp\" (UID: \"18dd40c2-0fcf-4e42-b711-fe7cac112584\") " pod="openshift-marketplace/certified-operators-mr8cp" Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.279859 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18dd40c2-0fcf-4e42-b711-fe7cac112584-catalog-content\") pod \"certified-operators-mr8cp\" (UID: \"18dd40c2-0fcf-4e42-b711-fe7cac112584\") " pod="openshift-marketplace/certified-operators-mr8cp" Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.300555 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v7np\" (UniqueName: \"kubernetes.io/projected/18dd40c2-0fcf-4e42-b711-fe7cac112584-kube-api-access-6v7np\") pod \"certified-operators-mr8cp\" (UID: \"18dd40c2-0fcf-4e42-b711-fe7cac112584\") " pod="openshift-marketplace/certified-operators-mr8cp" Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.307846 4718 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.336423 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7f7vb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 14:47:41 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Nov 23 14:47:41 crc kubenswrapper[4718]: [+]process-running ok Nov 23 14:47:41 crc kubenswrapper[4718]: healthz check failed Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.336492 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7f7vb" podUID="4281782c-70f4-442f-814b-2e60ea9dae88" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.382273 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tblh6"] Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.382594 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mr8cp" Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.382962 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:41 crc kubenswrapper[4718]: E1123 14:47:41.383255 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:41.883243143 +0000 UTC m=+113.122862987 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:41 crc kubenswrapper[4718]: W1123 14:47:41.419406 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f4a73b5_96e0_451e_994f_d7351ee3fef3.slice/crio-7f3d8816e36738d564ea92467109b8efb073ea1420d17fd790a04b8b6a1767dc WatchSource:0}: Error finding container 7f3d8816e36738d564ea92467109b8efb073ea1420d17fd790a04b8b6a1767dc: Status 404 returned error can't find the container with id 7f3d8816e36738d564ea92467109b8efb073ea1420d17fd790a04b8b6a1767dc Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.483826 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:41 crc kubenswrapper[4718]: E1123 14:47:41.484199 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:41.98418407 +0000 UTC m=+113.223803914 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.488352 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hqkrd"] Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.501119 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k8wqs"] Nov 23 14:47:41 crc kubenswrapper[4718]: W1123 14:47:41.511677 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fc35da2_aa04_4038_aaf9_3483211b1daa.slice/crio-a2a9a6e30675f32a5d2c0a64fa4b62a43b8ef464002c5f9a774ff7383f3d2a1e WatchSource:0}: Error finding container a2a9a6e30675f32a5d2c0a64fa4b62a43b8ef464002c5f9a774ff7383f3d2a1e: Status 404 returned error can't find the container with id a2a9a6e30675f32a5d2c0a64fa4b62a43b8ef464002c5f9a774ff7383f3d2a1e Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.584941 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:41 crc kubenswrapper[4718]: E1123 14:47:41.586772 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:42.086759439 +0000 UTC m=+113.326379283 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.637470 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mr8cp"] Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.646912 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"aa91d87c-dcfa-4467-84bb-cdbb0599176b","Type":"ContainerStarted","Data":"d012ab2dcb1225154253ce1cff48b3e28cbc67e16c10e533f7a7d710ab2c6f49"} Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.654548 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5qvjc" event={"ID":"af45cb6f-fed7-415b-bf75-e742a875a1c0","Type":"ContainerStarted","Data":"e4d6cc30926ebce489d88c67e05f070de1f36728dee7441919a01c8904aefb54"} Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.669184 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqkrd" event={"ID":"2fc35da2-aa04-4038-aaf9-3483211b1daa","Type":"ContainerStarted","Data":"a2a9a6e30675f32a5d2c0a64fa4b62a43b8ef464002c5f9a774ff7383f3d2a1e"} Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.670612 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8wqs" event={"ID":"80a1d25a-995b-4056-a9e4-cafa5bd6a143","Type":"ContainerStarted","Data":"0c74690e2bb2ad410b133df2c18ac7b753c3dcbe9e37a9873ae387cdb9708e95"} Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.671914 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7qh4j" event={"ID":"c82c8ca1-7a30-47c8-a679-abe265aca15b","Type":"ContainerStarted","Data":"2c1320e9b14a20314fb5cf838f70b9b8407bd8ade2cf664ae16b920c5866264e"} Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.673747 4718 generic.go:334] "Generic (PLEG): container finished" podID="03f55fc0-e04d-4f3a-8869-80cbb53c26ee" containerID="91b22b238a43688558fa254177d0b46f77b76822e841892653cadc58961a9f8f" exitCode=0 Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.673789 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398485-2272c" event={"ID":"03f55fc0-e04d-4f3a-8869-80cbb53c26ee","Type":"ContainerDied","Data":"91b22b238a43688558fa254177d0b46f77b76822e841892653cadc58961a9f8f"} Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.676576 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tblh6" event={"ID":"3f4a73b5-96e0-451e-994f-d7351ee3fef3","Type":"ContainerStarted","Data":"7f3d8816e36738d564ea92467109b8efb073ea1420d17fd790a04b8b6a1767dc"} Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.694408 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:41 crc kubenswrapper[4718]: E1123 14:47:41.694759 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 14:47:42.194745186 +0000 UTC m=+113.434365030 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.795989 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:41 crc kubenswrapper[4718]: E1123 14:47:41.796404 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 14:47:42.29638954 +0000 UTC m=+113.536009384 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tsbrh" (UID: "108ad2a6-0176-40d5-9252-577047cea58d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.800757 4718 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-23T14:47:41.307876163Z","Handler":null,"Name":""} Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.807232 4718 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.807259 4718 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.897069 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.902029 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 23 14:47:41 crc kubenswrapper[4718]: I1123 14:47:41.999136 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.004723 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.004757 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.056221 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.062921 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.104700 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.105008 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.106871 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4408fab7-4229-429f-9d07-b77d2ddb7fd1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4408fab7-4229-429f-9d07-b77d2ddb7fd1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.106928 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4408fab7-4229-429f-9d07-b77d2ddb7fd1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4408fab7-4229-429f-9d07-b77d2ddb7fd1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.108837 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.121124 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tsbrh\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.207974 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4408fab7-4229-429f-9d07-b77d2ddb7fd1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4408fab7-4229-429f-9d07-b77d2ddb7fd1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.208017 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4408fab7-4229-429f-9d07-b77d2ddb7fd1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4408fab7-4229-429f-9d07-b77d2ddb7fd1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.208092 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4408fab7-4229-429f-9d07-b77d2ddb7fd1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4408fab7-4229-429f-9d07-b77d2ddb7fd1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.229398 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4408fab7-4229-429f-9d07-b77d2ddb7fd1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4408fab7-4229-429f-9d07-b77d2ddb7fd1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.311967 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.337115 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7f7vb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 14:47:42 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Nov 23 14:47:42 crc kubenswrapper[4718]: [+]process-running ok Nov 23 14:47:42 crc kubenswrapper[4718]: healthz check failed Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.337177 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7f7vb" podUID="4281782c-70f4-442f-814b-2e60ea9dae88" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.456241 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.497781 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.517496 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tsbrh"] Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.666901 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zjlxq"] Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.668238 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zjlxq" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.670453 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.679772 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zjlxq"] Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.696619 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7qh4j" event={"ID":"c82c8ca1-7a30-47c8-a679-abe265aca15b","Type":"ContainerStarted","Data":"38049877e79ee00a60d52fd76bfed317c9b20777150a33ba7513efd90f8505b7"} Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.696695 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7qh4j" event={"ID":"c82c8ca1-7a30-47c8-a679-abe265aca15b","Type":"ContainerStarted","Data":"36154076bde16088b30b94302755dc20d962583df6bedcf67b3e08e02f71cfc6"} Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.699500 4718 generic.go:334] "Generic (PLEG): container finished" podID="3f4a73b5-96e0-451e-994f-d7351ee3fef3" containerID="01bf59c13259fc34277cec3dde7436d4b0220cad482a57ff844ea52e87496aed" exitCode=0 Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.699592 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tblh6" event={"ID":"3f4a73b5-96e0-451e-994f-d7351ee3fef3","Type":"ContainerDied","Data":"01bf59c13259fc34277cec3dde7436d4b0220cad482a57ff844ea52e87496aed"} Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.707472 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.710029 4718 generic.go:334] "Generic (PLEG): container finished" podID="aa91d87c-dcfa-4467-84bb-cdbb0599176b" containerID="87ac83749d440305f5d235a4d373c155345dd8c23f813646a2e61519fad7289b" exitCode=0 Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.710111 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"aa91d87c-dcfa-4467-84bb-cdbb0599176b","Type":"ContainerDied","Data":"87ac83749d440305f5d235a4d373c155345dd8c23f813646a2e61519fad7289b"} Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.716082 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" event={"ID":"108ad2a6-0176-40d5-9252-577047cea58d","Type":"ContainerStarted","Data":"c997546a855a639a2314ecea956ee30ef1003a24a440f44df413218a88770ab2"} Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.718954 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rhsc\" (UniqueName: \"kubernetes.io/projected/f93bf2f1-4f3a-44da-accb-10f3376e58db-kube-api-access-2rhsc\") pod \"redhat-marketplace-zjlxq\" (UID: \"f93bf2f1-4f3a-44da-accb-10f3376e58db\") " pod="openshift-marketplace/redhat-marketplace-zjlxq" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.719003 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f93bf2f1-4f3a-44da-accb-10f3376e58db-catalog-content\") pod \"redhat-marketplace-zjlxq\" (UID: \"f93bf2f1-4f3a-44da-accb-10f3376e58db\") " 
pod="openshift-marketplace/redhat-marketplace-zjlxq" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.719051 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f93bf2f1-4f3a-44da-accb-10f3376e58db-utilities\") pod \"redhat-marketplace-zjlxq\" (UID: \"f93bf2f1-4f3a-44da-accb-10f3376e58db\") " pod="openshift-marketplace/redhat-marketplace-zjlxq" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.728498 4718 generic.go:334] "Generic (PLEG): container finished" podID="2fc35da2-aa04-4038-aaf9-3483211b1daa" containerID="f330c3fe177f5f8f0f6ad191f729545cee50c5047b622ef6308c3a99984aa6a2" exitCode=0 Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.728652 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqkrd" event={"ID":"2fc35da2-aa04-4038-aaf9-3483211b1daa","Type":"ContainerDied","Data":"f330c3fe177f5f8f0f6ad191f729545cee50c5047b622ef6308c3a99984aa6a2"} Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.733504 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8wqs" event={"ID":"80a1d25a-995b-4056-a9e4-cafa5bd6a143","Type":"ContainerDied","Data":"3be0ed3d8b82ebd170b96dabf3d35f011f3a8c3275e53b1900539de78d736e67"} Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.733507 4718 generic.go:334] "Generic (PLEG): container finished" podID="80a1d25a-995b-4056-a9e4-cafa5bd6a143" containerID="3be0ed3d8b82ebd170b96dabf3d35f011f3a8c3275e53b1900539de78d736e67" exitCode=0 Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.734854 4718 generic.go:334] "Generic (PLEG): container finished" podID="18dd40c2-0fcf-4e42-b711-fe7cac112584" containerID="b3ff70b6a814f189fb88c1b1d95c5e26f60f8749bf6b8f69df09af9bb1d97cd1" exitCode=0 Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.735705 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mr8cp" event={"ID":"18dd40c2-0fcf-4e42-b711-fe7cac112584","Type":"ContainerDied","Data":"b3ff70b6a814f189fb88c1b1d95c5e26f60f8749bf6b8f69df09af9bb1d97cd1"} Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.735755 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mr8cp" event={"ID":"18dd40c2-0fcf-4e42-b711-fe7cac112584","Type":"ContainerStarted","Data":"cb5e42f681759ee3121865d83f4e31972ef0098deacc792cefefdc93650e08b5"} Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.745510 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-7qh4j" podStartSLOduration=81.745490118 podStartE2EDuration="1m21.745490118s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:42.726483206 +0000 UTC m=+113.966103050" watchObservedRunningTime="2025-11-23 14:47:42.745490118 +0000 UTC m=+113.985109962" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.760851 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.820405 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f93bf2f1-4f3a-44da-accb-10f3376e58db-catalog-content\") pod \"redhat-marketplace-zjlxq\" 
(UID: \"f93bf2f1-4f3a-44da-accb-10f3376e58db\") " pod="openshift-marketplace/redhat-marketplace-zjlxq" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.820482 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f93bf2f1-4f3a-44da-accb-10f3376e58db-utilities\") pod \"redhat-marketplace-zjlxq\" (UID: \"f93bf2f1-4f3a-44da-accb-10f3376e58db\") " pod="openshift-marketplace/redhat-marketplace-zjlxq" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.820595 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rhsc\" (UniqueName: \"kubernetes.io/projected/f93bf2f1-4f3a-44da-accb-10f3376e58db-kube-api-access-2rhsc\") pod \"redhat-marketplace-zjlxq\" (UID: \"f93bf2f1-4f3a-44da-accb-10f3376e58db\") " pod="openshift-marketplace/redhat-marketplace-zjlxq" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.822298 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f93bf2f1-4f3a-44da-accb-10f3376e58db-catalog-content\") pod \"redhat-marketplace-zjlxq\" (UID: \"f93bf2f1-4f3a-44da-accb-10f3376e58db\") " pod="openshift-marketplace/redhat-marketplace-zjlxq" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.822594 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f93bf2f1-4f3a-44da-accb-10f3376e58db-utilities\") pod \"redhat-marketplace-zjlxq\" (UID: \"f93bf2f1-4f3a-44da-accb-10f3376e58db\") " pod="openshift-marketplace/redhat-marketplace-zjlxq" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.851280 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rhsc\" (UniqueName: \"kubernetes.io/projected/f93bf2f1-4f3a-44da-accb-10f3376e58db-kube-api-access-2rhsc\") pod \"redhat-marketplace-zjlxq\" (UID: \"f93bf2f1-4f3a-44da-accb-10f3376e58db\") " pod="openshift-marketplace/redhat-marketplace-zjlxq" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.958675 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mlwtl" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.976266 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-5qvjc" podStartSLOduration=11.976247385 podStartE2EDuration="11.976247385s" podCreationTimestamp="2025-11-23 14:47:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:42.843293826 +0000 UTC m=+114.082913670" watchObservedRunningTime="2025-11-23 14:47:42.976247385 +0000 UTC m=+114.215867229" Nov 23 14:47:42 crc kubenswrapper[4718]: I1123 14:47:42.980101 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398485-2272c" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.008621 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zjlxq" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.054259 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fgp86"] Nov 23 14:47:43 crc kubenswrapper[4718]: E1123 14:47:43.054501 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f55fc0-e04d-4f3a-8869-80cbb53c26ee" containerName="collect-profiles" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.054512 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f55fc0-e04d-4f3a-8869-80cbb53c26ee" containerName="collect-profiles" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.054627 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f55fc0-e04d-4f3a-8869-80cbb53c26ee" containerName="collect-profiles" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.055344 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgp86" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.062956 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgp86"] Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.126109 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03f55fc0-e04d-4f3a-8869-80cbb53c26ee-config-volume\") pod \"03f55fc0-e04d-4f3a-8869-80cbb53c26ee\" (UID: \"03f55fc0-e04d-4f3a-8869-80cbb53c26ee\") " Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.126182 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr4lk\" (UniqueName: \"kubernetes.io/projected/03f55fc0-e04d-4f3a-8869-80cbb53c26ee-kube-api-access-gr4lk\") pod \"03f55fc0-e04d-4f3a-8869-80cbb53c26ee\" (UID: \"03f55fc0-e04d-4f3a-8869-80cbb53c26ee\") " Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.126590 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/03f55fc0-e04d-4f3a-8869-80cbb53c26ee-secret-volume\") pod \"03f55fc0-e04d-4f3a-8869-80cbb53c26ee\" (UID: \"03f55fc0-e04d-4f3a-8869-80cbb53c26ee\") " Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.126813 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e55e1134-e6df-4b01-ace8-c84d74fdea73-catalog-content\") pod \"redhat-marketplace-fgp86\" (UID: \"e55e1134-e6df-4b01-ace8-c84d74fdea73\") " pod="openshift-marketplace/redhat-marketplace-fgp86" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.126896 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmdwt\" (UniqueName: \"kubernetes.io/projected/e55e1134-e6df-4b01-ace8-c84d74fdea73-kube-api-access-tmdwt\") pod \"redhat-marketplace-fgp86\" (UID: \"e55e1134-e6df-4b01-ace8-c84d74fdea73\") " pod="openshift-marketplace/redhat-marketplace-fgp86" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.126933 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e55e1134-e6df-4b01-ace8-c84d74fdea73-utilities\") pod \"redhat-marketplace-fgp86\" (UID: \"e55e1134-e6df-4b01-ace8-c84d74fdea73\") " pod="openshift-marketplace/redhat-marketplace-fgp86" Nov 23 14:47:43 crc 
kubenswrapper[4718]: I1123 14:47:43.128318 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03f55fc0-e04d-4f3a-8869-80cbb53c26ee-config-volume" (OuterVolumeSpecName: "config-volume") pod "03f55fc0-e04d-4f3a-8869-80cbb53c26ee" (UID: "03f55fc0-e04d-4f3a-8869-80cbb53c26ee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.165114 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f55fc0-e04d-4f3a-8869-80cbb53c26ee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "03f55fc0-e04d-4f3a-8869-80cbb53c26ee" (UID: "03f55fc0-e04d-4f3a-8869-80cbb53c26ee"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.165334 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03f55fc0-e04d-4f3a-8869-80cbb53c26ee-kube-api-access-gr4lk" (OuterVolumeSpecName: "kube-api-access-gr4lk") pod "03f55fc0-e04d-4f3a-8869-80cbb53c26ee" (UID: "03f55fc0-e04d-4f3a-8869-80cbb53c26ee"). InnerVolumeSpecName "kube-api-access-gr4lk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.228488 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmdwt\" (UniqueName: \"kubernetes.io/projected/e55e1134-e6df-4b01-ace8-c84d74fdea73-kube-api-access-tmdwt\") pod \"redhat-marketplace-fgp86\" (UID: \"e55e1134-e6df-4b01-ace8-c84d74fdea73\") " pod="openshift-marketplace/redhat-marketplace-fgp86" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.228586 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e55e1134-e6df-4b01-ace8-c84d74fdea73-utilities\") pod \"redhat-marketplace-fgp86\" (UID: \"e55e1134-e6df-4b01-ace8-c84d74fdea73\") " pod="openshift-marketplace/redhat-marketplace-fgp86" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.228752 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e55e1134-e6df-4b01-ace8-c84d74fdea73-catalog-content\") pod \"redhat-marketplace-fgp86\" (UID: \"e55e1134-e6df-4b01-ace8-c84d74fdea73\") " pod="openshift-marketplace/redhat-marketplace-fgp86" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.228821 4718 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/03f55fc0-e04d-4f3a-8869-80cbb53c26ee-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.228846 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03f55fc0-e04d-4f3a-8869-80cbb53c26ee-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.228867 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr4lk\" (UniqueName: \"kubernetes.io/projected/03f55fc0-e04d-4f3a-8869-80cbb53c26ee-kube-api-access-gr4lk\") on node \"crc\" DevicePath \"\"" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.229700 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e55e1134-e6df-4b01-ace8-c84d74fdea73-catalog-content\") pod \"redhat-marketplace-fgp86\" (UID: \"e55e1134-e6df-4b01-ace8-c84d74fdea73\") " pod="openshift-marketplace/redhat-marketplace-fgp86" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.229925 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e55e1134-e6df-4b01-ace8-c84d74fdea73-utilities\") pod \"redhat-marketplace-fgp86\" (UID: \"e55e1134-e6df-4b01-ace8-c84d74fdea73\") " pod="openshift-marketplace/redhat-marketplace-fgp86" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.273649 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmdwt\" (UniqueName: \"kubernetes.io/projected/e55e1134-e6df-4b01-ace8-c84d74fdea73-kube-api-access-tmdwt\") pod \"redhat-marketplace-fgp86\" (UID: \"e55e1134-e6df-4b01-ace8-c84d74fdea73\") " pod="openshift-marketplace/redhat-marketplace-fgp86" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.329555 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zjlxq"] Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.333162 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7f7vb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 14:47:43 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Nov 23 14:47:43 crc kubenswrapper[4718]: [+]process-running ok Nov 23 14:47:43 crc kubenswrapper[4718]: healthz check failed Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.333213 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7f7vb" podUID="4281782c-70f4-442f-814b-2e60ea9dae88" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.380405 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgp86" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.517824 4718 patch_prober.go:28] interesting pod/apiserver-76f77b778f-gwk4k container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 23 14:47:43 crc kubenswrapper[4718]: [+]log ok Nov 23 14:47:43 crc kubenswrapper[4718]: [+]etcd ok Nov 23 14:47:43 crc kubenswrapper[4718]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 23 14:47:43 crc kubenswrapper[4718]: [+]poststarthook/generic-apiserver-start-informers ok Nov 23 14:47:43 crc kubenswrapper[4718]: [+]poststarthook/max-in-flight-filter ok Nov 23 14:47:43 crc kubenswrapper[4718]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 23 14:47:43 crc kubenswrapper[4718]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 23 14:47:43 crc kubenswrapper[4718]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 23 14:47:43 crc kubenswrapper[4718]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Nov 23 14:47:43 crc kubenswrapper[4718]: [+]poststarthook/project.openshift.io-projectcache ok Nov 23 14:47:43 crc kubenswrapper[4718]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 23 14:47:43 crc kubenswrapper[4718]: [+]poststarthook/openshift.io-startinformers ok Nov 23 14:47:43 crc kubenswrapper[4718]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 23 14:47:43 crc kubenswrapper[4718]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 23 14:47:43 crc kubenswrapper[4718]: livez check failed Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.517880 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" podUID="2635b714-cae2-41c2-8a7b-87075c04e2b3" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.584092 4718 patch_prober.go:28] interesting pod/downloads-7954f5f757-7hg9r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.584542 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7hg9r" podUID="87a6d300-fa67-4762-9025-232fcb2ea96d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.584153 4718 patch_prober.go:28] interesting pod/downloads-7954f5f757-7hg9r container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.584902 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-7hg9r" podUID="87a6d300-fa67-4762-9025-232fcb2ea96d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.656118 4718 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-operators-4hjl5"] Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.657174 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4hjl5" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.664166 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.685584 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4hjl5"] Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.744181 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4408fab7-4229-429f-9d07-b77d2ddb7fd1","Type":"ContainerStarted","Data":"690d65ea1a3878823251af477334561bdbc6a8e27ca9d63fa644735d766b5e3e"} Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.744235 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4408fab7-4229-429f-9d07-b77d2ddb7fd1","Type":"ContainerStarted","Data":"4b63468a6960988c860084dd48dc333b000a5b3567453e96ae2f88676a9c037d"} Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.747284 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" event={"ID":"108ad2a6-0176-40d5-9252-577047cea58d","Type":"ContainerStarted","Data":"5bd438efdef5a0ce0c86854ef5284103ba2989a0ba6566153b37608f11cd73ca"} Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.747635 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.751538 4718 generic.go:334] "Generic (PLEG): container finished" podID="f93bf2f1-4f3a-44da-accb-10f3376e58db" containerID="394cb180f9150b7ea7e00af2ff924c92d52a12d79db788a98447c0d7617ec208" exitCode=0 Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.751589 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zjlxq" event={"ID":"f93bf2f1-4f3a-44da-accb-10f3376e58db","Type":"ContainerDied","Data":"394cb180f9150b7ea7e00af2ff924c92d52a12d79db788a98447c0d7617ec208"} Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.751610 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zjlxq" event={"ID":"f93bf2f1-4f3a-44da-accb-10f3376e58db","Type":"ContainerStarted","Data":"bb5f9e27a2f049e19ebeb731b445f0f0425dce65771bc3d6688ae163ffae0c28"} Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.760300 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398485-2272c" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.761207 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398485-2272c" event={"ID":"03f55fc0-e04d-4f3a-8869-80cbb53c26ee","Type":"ContainerDied","Data":"f12ca1ac96a2e3d42c6a13427a8a0ae9ddc9b091daa3f3ce9e2813c727b1bd05"} Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.761244 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f12ca1ac96a2e3d42c6a13427a8a0ae9ddc9b091daa3f3ce9e2813c727b1bd05" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.769359 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=1.769343939 podStartE2EDuration="1.769343939s" podCreationTimestamp="2025-11-23 14:47:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:43.758090454 +0000 UTC m=+114.997710298" watchObservedRunningTime="2025-11-23 14:47:43.769343939 +0000 UTC m=+115.008963783" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.781828 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" podStartSLOduration=82.781805875 podStartE2EDuration="1m22.781805875s" podCreationTimestamp="2025-11-23 14:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:47:43.775541627 +0000 UTC m=+115.015161471" watchObservedRunningTime="2025-11-23 14:47:43.781805875 +0000 UTC m=+115.021425719" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.838793 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b70d270d-3157-42c4-ba7e-b3b2755349a7-utilities\") pod \"redhat-operators-4hjl5\" (UID: \"b70d270d-3157-42c4-ba7e-b3b2755349a7\") " pod="openshift-marketplace/redhat-operators-4hjl5" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.838836 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj2zf\" (UniqueName: \"kubernetes.io/projected/b70d270d-3157-42c4-ba7e-b3b2755349a7-kube-api-access-dj2zf\") pod \"redhat-operators-4hjl5\" (UID: \"b70d270d-3157-42c4-ba7e-b3b2755349a7\") " pod="openshift-marketplace/redhat-operators-4hjl5" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.838940 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b70d270d-3157-42c4-ba7e-b3b2755349a7-catalog-content\") pod \"redhat-operators-4hjl5\" (UID: \"b70d270d-3157-42c4-ba7e-b3b2755349a7\") " pod="openshift-marketplace/redhat-operators-4hjl5" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.862667 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.863002 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.874928 4718 patch_prober.go:28] interesting pod/console-f9d7485db-bzr6j 
container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.874975 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-bzr6j" podUID="0e08255e-d787-4ed2-a984-3f4bd8a8e2d2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.908250 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgp86"] Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.939660 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b70d270d-3157-42c4-ba7e-b3b2755349a7-catalog-content\") pod \"redhat-operators-4hjl5\" (UID: \"b70d270d-3157-42c4-ba7e-b3b2755349a7\") " pod="openshift-marketplace/redhat-operators-4hjl5" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.939736 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b70d270d-3157-42c4-ba7e-b3b2755349a7-utilities\") pod \"redhat-operators-4hjl5\" (UID: \"b70d270d-3157-42c4-ba7e-b3b2755349a7\") " pod="openshift-marketplace/redhat-operators-4hjl5" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.939759 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj2zf\" (UniqueName: \"kubernetes.io/projected/b70d270d-3157-42c4-ba7e-b3b2755349a7-kube-api-access-dj2zf\") pod \"redhat-operators-4hjl5\" (UID: \"b70d270d-3157-42c4-ba7e-b3b2755349a7\") " pod="openshift-marketplace/redhat-operators-4hjl5" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.940866 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b70d270d-3157-42c4-ba7e-b3b2755349a7-utilities\") pod \"redhat-operators-4hjl5\" (UID: \"b70d270d-3157-42c4-ba7e-b3b2755349a7\") " pod="openshift-marketplace/redhat-operators-4hjl5" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.941462 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b70d270d-3157-42c4-ba7e-b3b2755349a7-catalog-content\") pod \"redhat-operators-4hjl5\" (UID: \"b70d270d-3157-42c4-ba7e-b3b2755349a7\") " pod="openshift-marketplace/redhat-operators-4hjl5" Nov 23 14:47:43 crc kubenswrapper[4718]: I1123 14:47:43.996460 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj2zf\" (UniqueName: \"kubernetes.io/projected/b70d270d-3157-42c4-ba7e-b3b2755349a7-kube-api-access-dj2zf\") pod \"redhat-operators-4hjl5\" (UID: \"b70d270d-3157-42c4-ba7e-b3b2755349a7\") " pod="openshift-marketplace/redhat-operators-4hjl5" Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.058256 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4v55m"] Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.059558 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4v55m" Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.069977 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4v55m"] Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.083985 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.148244 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a64de119-1bd0-4a8f-87d2-64e130749dc7-catalog-content\") pod \"redhat-operators-4v55m\" (UID: \"a64de119-1bd0-4a8f-87d2-64e130749dc7\") " pod="openshift-marketplace/redhat-operators-4v55m" Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.148490 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a64de119-1bd0-4a8f-87d2-64e130749dc7-utilities\") pod \"redhat-operators-4v55m\" (UID: \"a64de119-1bd0-4a8f-87d2-64e130749dc7\") " pod="openshift-marketplace/redhat-operators-4v55m" Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.148653 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pplg\" (UniqueName: \"kubernetes.io/projected/a64de119-1bd0-4a8f-87d2-64e130749dc7-kube-api-access-2pplg\") pod \"redhat-operators-4v55m\" (UID: \"a64de119-1bd0-4a8f-87d2-64e130749dc7\") " pod="openshift-marketplace/redhat-operators-4v55m" Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.249280 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa91d87c-dcfa-4467-84bb-cdbb0599176b-kubelet-dir\") pod \"aa91d87c-dcfa-4467-84bb-cdbb0599176b\" (UID: \"aa91d87c-dcfa-4467-84bb-cdbb0599176b\") " Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.249386 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa91d87c-dcfa-4467-84bb-cdbb0599176b-kube-api-access\") pod \"aa91d87c-dcfa-4467-84bb-cdbb0599176b\" (UID: \"aa91d87c-dcfa-4467-84bb-cdbb0599176b\") " Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.249795 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a64de119-1bd0-4a8f-87d2-64e130749dc7-utilities\") pod \"redhat-operators-4v55m\" (UID: \"a64de119-1bd0-4a8f-87d2-64e130749dc7\") " pod="openshift-marketplace/redhat-operators-4v55m" Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.249904 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pplg\" (UniqueName: \"kubernetes.io/projected/a64de119-1bd0-4a8f-87d2-64e130749dc7-kube-api-access-2pplg\") pod \"redhat-operators-4v55m\" (UID: \"a64de119-1bd0-4a8f-87d2-64e130749dc7\") " pod="openshift-marketplace/redhat-operators-4v55m" Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.250007 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a64de119-1bd0-4a8f-87d2-64e130749dc7-catalog-content\") pod \"redhat-operators-4v55m\" (UID: \"a64de119-1bd0-4a8f-87d2-64e130749dc7\") " 
pod="openshift-marketplace/redhat-operators-4v55m" Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.250747 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a64de119-1bd0-4a8f-87d2-64e130749dc7-catalog-content\") pod \"redhat-operators-4v55m\" (UID: \"a64de119-1bd0-4a8f-87d2-64e130749dc7\") " pod="openshift-marketplace/redhat-operators-4v55m" Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.250874 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa91d87c-dcfa-4467-84bb-cdbb0599176b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "aa91d87c-dcfa-4467-84bb-cdbb0599176b" (UID: "aa91d87c-dcfa-4467-84bb-cdbb0599176b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.252632 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a64de119-1bd0-4a8f-87d2-64e130749dc7-utilities\") pod \"redhat-operators-4v55m\" (UID: \"a64de119-1bd0-4a8f-87d2-64e130749dc7\") " pod="openshift-marketplace/redhat-operators-4v55m" Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.256276 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa91d87c-dcfa-4467-84bb-cdbb0599176b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "aa91d87c-dcfa-4467-84bb-cdbb0599176b" (UID: "aa91d87c-dcfa-4467-84bb-cdbb0599176b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.273074 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pplg\" (UniqueName: \"kubernetes.io/projected/a64de119-1bd0-4a8f-87d2-64e130749dc7-kube-api-access-2pplg\") pod \"redhat-operators-4v55m\" (UID: \"a64de119-1bd0-4a8f-87d2-64e130749dc7\") " pod="openshift-marketplace/redhat-operators-4v55m" Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.285942 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4hjl5" Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.288116 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.295500 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfnl4" Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.330640 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-7f7vb" Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.337089 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7f7vb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 14:47:44 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Nov 23 14:47:44 crc kubenswrapper[4718]: [+]process-running ok Nov 23 14:47:44 crc kubenswrapper[4718]: healthz check failed Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.337130 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7f7vb" podUID="4281782c-70f4-442f-814b-2e60ea9dae88" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.353102 4718 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa91d87c-dcfa-4467-84bb-cdbb0599176b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.353137 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa91d87c-dcfa-4467-84bb-cdbb0599176b-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.391550 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4v55m" Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.576848 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4hjl5"] Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.715262 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4v55m"] Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.756384 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt" Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.771690 4718 generic.go:334] "Generic (PLEG): container finished" podID="e55e1134-e6df-4b01-ace8-c84d74fdea73" containerID="31cd3fd4f2314b4d299f7aa219dae06b68916c17bb2689431b999cbfb13dea72" exitCode=0 Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.771790 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgp86" event={"ID":"e55e1134-e6df-4b01-ace8-c84d74fdea73","Type":"ContainerDied","Data":"31cd3fd4f2314b4d299f7aa219dae06b68916c17bb2689431b999cbfb13dea72"} Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.771818 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgp86" event={"ID":"e55e1134-e6df-4b01-ace8-c84d74fdea73","Type":"ContainerStarted","Data":"9cdb76756181e9c61517176c7a813b8663f7d948a8c849c3a85ebda0efae94a9"} Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.787393 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"aa91d87c-dcfa-4467-84bb-cdbb0599176b","Type":"ContainerDied","Data":"d012ab2dcb1225154253ce1cff48b3e28cbc67e16c10e533f7a7d710ab2c6f49"} Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.787429 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d012ab2dcb1225154253ce1cff48b3e28cbc67e16c10e533f7a7d710ab2c6f49" Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.787529 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 14:47:44 crc kubenswrapper[4718]: W1123 14:47:44.789083 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda64de119_1bd0_4a8f_87d2_64e130749dc7.slice/crio-ee347229747b0ded0b118bb3bb641c73e718fe9a86fd09f7d5b93fcdff093581 WatchSource:0}: Error finding container ee347229747b0ded0b118bb3bb641c73e718fe9a86fd09f7d5b93fcdff093581: Status 404 returned error can't find the container with id ee347229747b0ded0b118bb3bb641c73e718fe9a86fd09f7d5b93fcdff093581 Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.793285 4718 generic.go:334] "Generic (PLEG): container finished" podID="4408fab7-4229-429f-9d07-b77d2ddb7fd1" containerID="690d65ea1a3878823251af477334561bdbc6a8e27ca9d63fa644735d766b5e3e" exitCode=0 Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.793480 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4408fab7-4229-429f-9d07-b77d2ddb7fd1","Type":"ContainerDied","Data":"690d65ea1a3878823251af477334561bdbc6a8e27ca9d63fa644735d766b5e3e"} Nov 23 14:47:44 crc kubenswrapper[4718]: I1123 14:47:44.820356 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hjl5" event={"ID":"b70d270d-3157-42c4-ba7e-b3b2755349a7","Type":"ContainerStarted","Data":"533eebe4c902b36eed3275b7685facc8bbf6a526954032ee1e038acca8bb77c3"} Nov 23 14:47:45 crc kubenswrapper[4718]: I1123 14:47:45.333822 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7f7vb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 14:47:45 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Nov 23 14:47:45 crc kubenswrapper[4718]: [+]process-running ok Nov 23 14:47:45 crc kubenswrapper[4718]: healthz check failed Nov 23 14:47:45 crc kubenswrapper[4718]: I1123 14:47:45.333891 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7f7vb" podUID="4281782c-70f4-442f-814b-2e60ea9dae88" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 14:47:45 crc kubenswrapper[4718]: I1123 14:47:45.833504 4718 generic.go:334] "Generic (PLEG): container finished" podID="a64de119-1bd0-4a8f-87d2-64e130749dc7" containerID="d09d70726bb68f7ce4dfad8c82aea5e51db729b1a8ce4f3da661d5550dbf48ad" exitCode=0 Nov 23 14:47:45 crc kubenswrapper[4718]: I1123 14:47:45.833691 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4v55m" event={"ID":"a64de119-1bd0-4a8f-87d2-64e130749dc7","Type":"ContainerDied","Data":"d09d70726bb68f7ce4dfad8c82aea5e51db729b1a8ce4f3da661d5550dbf48ad"} Nov 23 14:47:45 crc kubenswrapper[4718]: I1123 14:47:45.833722 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4v55m" event={"ID":"a64de119-1bd0-4a8f-87d2-64e130749dc7","Type":"ContainerStarted","Data":"ee347229747b0ded0b118bb3bb641c73e718fe9a86fd09f7d5b93fcdff093581"} Nov 23 14:47:45 crc kubenswrapper[4718]: I1123 14:47:45.843231 4718 generic.go:334] "Generic (PLEG): container finished" podID="b70d270d-3157-42c4-ba7e-b3b2755349a7" containerID="00d12b2acbdf6266632dd87ae0b0986708e31c2239be92bde847c69eb99f7dbd" exitCode=0 Nov 23 14:47:45 crc kubenswrapper[4718]: I1123 
14:47:45.843574 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hjl5" event={"ID":"b70d270d-3157-42c4-ba7e-b3b2755349a7","Type":"ContainerDied","Data":"00d12b2acbdf6266632dd87ae0b0986708e31c2239be92bde847c69eb99f7dbd"} Nov 23 14:47:46 crc kubenswrapper[4718]: I1123 14:47:46.172816 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 14:47:46 crc kubenswrapper[4718]: I1123 14:47:46.286283 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4408fab7-4229-429f-9d07-b77d2ddb7fd1-kube-api-access\") pod \"4408fab7-4229-429f-9d07-b77d2ddb7fd1\" (UID: \"4408fab7-4229-429f-9d07-b77d2ddb7fd1\") " Nov 23 14:47:46 crc kubenswrapper[4718]: I1123 14:47:46.286407 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4408fab7-4229-429f-9d07-b77d2ddb7fd1-kubelet-dir\") pod \"4408fab7-4229-429f-9d07-b77d2ddb7fd1\" (UID: \"4408fab7-4229-429f-9d07-b77d2ddb7fd1\") " Nov 23 14:47:46 crc kubenswrapper[4718]: I1123 14:47:46.286679 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4408fab7-4229-429f-9d07-b77d2ddb7fd1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4408fab7-4229-429f-9d07-b77d2ddb7fd1" (UID: "4408fab7-4229-429f-9d07-b77d2ddb7fd1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 14:47:46 crc kubenswrapper[4718]: I1123 14:47:46.286972 4718 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4408fab7-4229-429f-9d07-b77d2ddb7fd1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 23 14:47:46 crc kubenswrapper[4718]: I1123 14:47:46.306798 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4408fab7-4229-429f-9d07-b77d2ddb7fd1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4408fab7-4229-429f-9d07-b77d2ddb7fd1" (UID: "4408fab7-4229-429f-9d07-b77d2ddb7fd1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:47:46 crc kubenswrapper[4718]: I1123 14:47:46.334192 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7f7vb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 14:47:46 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Nov 23 14:47:46 crc kubenswrapper[4718]: [+]process-running ok Nov 23 14:47:46 crc kubenswrapper[4718]: healthz check failed Nov 23 14:47:46 crc kubenswrapper[4718]: I1123 14:47:46.334275 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7f7vb" podUID="4281782c-70f4-442f-814b-2e60ea9dae88" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 14:47:46 crc kubenswrapper[4718]: I1123 14:47:46.388794 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4408fab7-4229-429f-9d07-b77d2ddb7fd1-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 14:47:46 crc kubenswrapper[4718]: I1123 14:47:46.467612 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-jh7tn" Nov 23 14:47:46 crc kubenswrapper[4718]: I1123 14:47:46.854609 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4408fab7-4229-429f-9d07-b77d2ddb7fd1","Type":"ContainerDied","Data":"4b63468a6960988c860084dd48dc333b000a5b3567453e96ae2f88676a9c037d"} Nov 23 14:47:46 crc kubenswrapper[4718]: I1123 14:47:46.854653 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b63468a6960988c860084dd48dc333b000a5b3567453e96ae2f88676a9c037d" Nov 23 14:47:46 crc kubenswrapper[4718]: I1123 14:47:46.854717 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 14:47:47 crc kubenswrapper[4718]: I1123 14:47:47.332774 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7f7vb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 14:47:47 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Nov 23 14:47:47 crc kubenswrapper[4718]: [+]process-running ok Nov 23 14:47:47 crc kubenswrapper[4718]: healthz check failed Nov 23 14:47:47 crc kubenswrapper[4718]: I1123 14:47:47.332830 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7f7vb" podUID="4281782c-70f4-442f-814b-2e60ea9dae88" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 14:47:48 crc kubenswrapper[4718]: I1123 14:47:48.332791 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7f7vb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 14:47:48 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Nov 23 14:47:48 crc kubenswrapper[4718]: [+]process-running ok Nov 23 14:47:48 crc kubenswrapper[4718]: healthz check failed Nov 23 14:47:48 crc kubenswrapper[4718]: I1123 14:47:48.333062 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7f7vb" podUID="4281782c-70f4-442f-814b-2e60ea9dae88" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 14:47:48 crc kubenswrapper[4718]: I1123 14:47:48.517390 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:48 crc kubenswrapper[4718]: I1123 14:47:48.521455 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-gwk4k" Nov 23 14:47:49 crc kubenswrapper[4718]: I1123 14:47:49.331899 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7f7vb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 14:47:49 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Nov 23 14:47:49 crc kubenswrapper[4718]: [+]process-running ok Nov 23 14:47:49 crc kubenswrapper[4718]: healthz check failed Nov 23 14:47:49 crc kubenswrapper[4718]: I1123 14:47:49.331986 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7f7vb" podUID="4281782c-70f4-442f-814b-2e60ea9dae88" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 14:47:50 crc kubenswrapper[4718]: I1123 14:47:50.333016 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7f7vb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 14:47:50 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Nov 23 14:47:50 crc kubenswrapper[4718]: [+]process-running ok Nov 23 14:47:50 crc kubenswrapper[4718]: healthz check failed Nov 23 14:47:50 crc kubenswrapper[4718]: I1123 14:47:50.333118 4718 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7f7vb" podUID="4281782c-70f4-442f-814b-2e60ea9dae88" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 14:47:51 crc kubenswrapper[4718]: I1123 14:47:51.333269 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7f7vb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 14:47:51 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Nov 23 14:47:51 crc kubenswrapper[4718]: [+]process-running ok Nov 23 14:47:51 crc kubenswrapper[4718]: healthz check failed Nov 23 14:47:51 crc kubenswrapper[4718]: I1123 14:47:51.333524 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7f7vb" podUID="4281782c-70f4-442f-814b-2e60ea9dae88" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 14:47:52 crc kubenswrapper[4718]: I1123 14:47:52.332887 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7f7vb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 14:47:52 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Nov 23 14:47:52 crc kubenswrapper[4718]: [+]process-running ok Nov 23 14:47:52 crc kubenswrapper[4718]: healthz check failed Nov 23 14:47:52 crc kubenswrapper[4718]: I1123 14:47:52.332971 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7f7vb" podUID="4281782c-70f4-442f-814b-2e60ea9dae88" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 14:47:53 crc kubenswrapper[4718]: I1123 14:47:53.111237 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:47:53 crc kubenswrapper[4718]: I1123 14:47:53.333856 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7f7vb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 14:47:53 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Nov 23 14:47:53 crc kubenswrapper[4718]: [+]process-running ok Nov 23 14:47:53 crc kubenswrapper[4718]: healthz check failed Nov 23 14:47:53 crc kubenswrapper[4718]: I1123 14:47:53.333928 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7f7vb" podUID="4281782c-70f4-442f-814b-2e60ea9dae88" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 14:47:53 crc kubenswrapper[4718]: I1123 14:47:53.582856 4718 patch_prober.go:28] interesting pod/downloads-7954f5f757-7hg9r container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Nov 23 14:47:53 crc kubenswrapper[4718]: I1123 14:47:53.582899 4718 patch_prober.go:28] interesting pod/downloads-7954f5f757-7hg9r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" 
start-of-body= Nov 23 14:47:53 crc kubenswrapper[4718]: I1123 14:47:53.582933 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-7hg9r" podUID="87a6d300-fa67-4762-9025-232fcb2ea96d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Nov 23 14:47:53 crc kubenswrapper[4718]: I1123 14:47:53.582971 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7hg9r" podUID="87a6d300-fa67-4762-9025-232fcb2ea96d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Nov 23 14:47:53 crc kubenswrapper[4718]: I1123 14:47:53.857473 4718 patch_prober.go:28] interesting pod/console-f9d7485db-bzr6j container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Nov 23 14:47:53 crc kubenswrapper[4718]: I1123 14:47:53.857761 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-bzr6j" podUID="0e08255e-d787-4ed2-a984-3f4bd8a8e2d2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Nov 23 14:47:54 crc kubenswrapper[4718]: I1123 14:47:54.333447 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7f7vb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 14:47:54 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Nov 23 14:47:54 crc kubenswrapper[4718]: [+]process-running ok Nov 23 14:47:54 crc kubenswrapper[4718]: healthz check failed Nov 23 14:47:54 crc kubenswrapper[4718]: I1123 14:47:54.333533 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7f7vb" podUID="4281782c-70f4-442f-814b-2e60ea9dae88" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 14:47:55 crc kubenswrapper[4718]: I1123 14:47:55.332294 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7f7vb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 14:47:55 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Nov 23 14:47:55 crc kubenswrapper[4718]: [+]process-running ok Nov 23 14:47:55 crc kubenswrapper[4718]: healthz check failed Nov 23 14:47:55 crc kubenswrapper[4718]: I1123 14:47:55.332352 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7f7vb" podUID="4281782c-70f4-442f-814b-2e60ea9dae88" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 14:47:56 crc kubenswrapper[4718]: I1123 14:47:56.343804 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7f7vb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 14:47:56 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Nov 23 14:47:56 crc kubenswrapper[4718]: [+]process-running ok 
Nov 23 14:47:56 crc kubenswrapper[4718]: healthz check failed
Nov 23 14:47:56 crc kubenswrapper[4718]: I1123 14:47:56.343904 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7f7vb" podUID="4281782c-70f4-442f-814b-2e60ea9dae88" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 23 14:47:57 crc kubenswrapper[4718]: I1123 14:47:57.332938 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7f7vb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 23 14:47:57 crc kubenswrapper[4718]: [+]has-synced ok
Nov 23 14:47:57 crc kubenswrapper[4718]: [+]process-running ok
Nov 23 14:47:57 crc kubenswrapper[4718]: healthz check failed
Nov 23 14:47:57 crc kubenswrapper[4718]: I1123 14:47:57.333362 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7f7vb" podUID="4281782c-70f4-442f-814b-2e60ea9dae88" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 23 14:47:57 crc kubenswrapper[4718]: I1123 14:47:57.931606 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-rzq5f_0e8598b7-32cf-41c5-9a1b-382a5c0bcc2f/cluster-samples-operator/0.log"
Nov 23 14:47:57 crc kubenswrapper[4718]: I1123 14:47:57.931686 4718 generic.go:334] "Generic (PLEG): container finished" podID="0e8598b7-32cf-41c5-9a1b-382a5c0bcc2f" containerID="cb93e265c92075cd99293936079eae63115f4ba61d10ee9015c4375344bbb9bd" exitCode=2
Nov 23 14:47:57 crc kubenswrapper[4718]: I1123 14:47:57.931728 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rzq5f" event={"ID":"0e8598b7-32cf-41c5-9a1b-382a5c0bcc2f","Type":"ContainerDied","Data":"cb93e265c92075cd99293936079eae63115f4ba61d10ee9015c4375344bbb9bd"}
Nov 23 14:47:57 crc kubenswrapper[4718]: I1123 14:47:57.932415 4718 scope.go:117] "RemoveContainer" containerID="cb93e265c92075cd99293936079eae63115f4ba61d10ee9015c4375344bbb9bd"
Nov 23 14:47:58 crc kubenswrapper[4718]: I1123 14:47:58.333701 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-7f7vb"
Nov 23 14:47:58 crc kubenswrapper[4718]: I1123 14:47:58.336292 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-7f7vb"
Nov 23 14:48:02 crc kubenswrapper[4718]: I1123 14:48:02.318502 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh"
Nov 23 14:48:03 crc kubenswrapper[4718]: I1123 14:48:03.600081 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-7hg9r"
Nov 23 14:48:03 crc kubenswrapper[4718]: I1123 14:48:03.860314 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-bzr6j"
Nov 23 14:48:03 crc kubenswrapper[4718]: I1123 14:48:03.864662 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-bzr6j"
Nov 23 14:48:14 crc kubenswrapper[4718]: I1123 14:48:14.705289 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czc9t"
Nov 23 14:48:22 crc kubenswrapper[4718]: I1123 14:48:22.464139 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 14:48:22 crc kubenswrapper[4718]: I1123 14:48:22.464254 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 14:48:22 crc kubenswrapper[4718]: I1123 14:48:22.464402 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 14:48:22 crc kubenswrapper[4718]: I1123 14:48:22.464495 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 14:48:22 crc kubenswrapper[4718]: I1123 14:48:22.467194 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Nov 23 14:48:22 crc kubenswrapper[4718]: I1123 14:48:22.467224 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Nov 23 14:48:22 crc kubenswrapper[4718]: I1123 14:48:22.468084 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Nov 23 14:48:22 crc kubenswrapper[4718]: I1123 14:48:22.476883 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Nov 23 14:48:22 crc kubenswrapper[4718]: I1123 14:48:22.477735 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 14:48:22 crc kubenswrapper[4718]: I1123 14:48:22.493754 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 14:48:22 crc kubenswrapper[4718]: I1123 14:48:22.494786 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 14:48:22 crc kubenswrapper[4718]: I1123 14:48:22.496329 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 14:48:22 crc kubenswrapper[4718]: I1123 14:48:22.566193 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 14:48:22 crc kubenswrapper[4718]: I1123 14:48:22.590845 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 14:48:22 crc kubenswrapper[4718]: I1123 14:48:22.603123 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 14:48:23 crc kubenswrapper[4718]: I1123 14:48:23.057473 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 14:48:23 crc kubenswrapper[4718]: I1123 14:48:23.057604 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 14:48:23 crc kubenswrapper[4718]: E1123 14:48:23.116854 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Nov 23 14:48:23 crc kubenswrapper[4718]: E1123 14:48:23.117730 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6v7np,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-mr8cp_openshift-marketplace(18dd40c2-0fcf-4e42-b711-fe7cac112584): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
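The "Unhandled Error" entry above (and the seven similar ones that follow) prints the failing init container spec as a raw Go struct dump, which is hard to scan. Purely as a readability aid, not part of the log, here is a hedged reconstruction of that same extract-content container using k8s.io/api/core/v1 types; every value is copied from the 14:48:23.117730 dump, fields that were nil or empty there are omitted, and between the eight dumps only the index image, the kube-api-access-* mount name, and the target pod differ.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// extractContent rebuilds the init container from the kuberuntime_manager.go
// dump above as typed structs. Values are copied from the dump; fields that
// were nil or empty in the dump are left at their Go zero value here.
func extractContent() corev1.Container {
	runAsUser := int64(1000170000)
	runAsNonRoot := true
	allowPrivilegeEscalation := false
	return corev1.Container{
		Name:    "extract-content",
		Image:   "registry.redhat.io/redhat/certified-operator-index:v4.18",
		Command: []string{"/utilities/copy-content"},
		Args: []string{
			"--catalog.from=/configs",
			"--catalog.to=/extracted-catalog/catalog",
			"--cache.from=/tmp/cache",
			"--cache.to=/extracted-catalog/cache",
		},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "utilities", MountPath: "/utilities"},
			{Name: "catalog-content", MountPath: "/extracted-catalog"},
			{Name: "kube-api-access-6v7np", ReadOnly: true,
				MountPath: "/var/run/secrets/kubernetes.io/serviceaccount"},
		},
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
		ImagePullPolicy:          corev1.PullAlways,
		SecurityContext: &corev1.SecurityContext{
			Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
			RunAsUser:                &runAsUser,
			RunAsNonRoot:             &runAsNonRoot,
			AllowPrivilegeEscalation: &allowPrivilegeEscalation,
		},
	}
}

func main() {
	// Printing with %+v gives back roughly the &Container{...} form seen in the log.
	fmt.Printf("%+v\n", extractContent())
}
```

Read together with the surrounding "PullImage from image service failed" entries, the dumps identify what keeps failing to start: each marketplace catalog pod's extract-content init container, whose image pull is repeatedly canceled and then placed in back-off.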
Nov 23 14:48:23 crc kubenswrapper[4718]: E1123 14:48:23.119216 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-mr8cp" podUID="18dd40c2-0fcf-4e42-b711-fe7cac112584"
Nov 23 14:48:26 crc kubenswrapper[4718]: E1123 14:48:26.416138 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-mr8cp" podUID="18dd40c2-0fcf-4e42-b711-fe7cac112584"
Nov 23 14:48:28 crc kubenswrapper[4718]: E1123 14:48:28.329964 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Nov 23 14:48:28 crc kubenswrapper[4718]: E1123 14:48:28.330151 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rhsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-zjlxq_openshift-marketplace(f93bf2f1-4f3a-44da-accb-10f3376e58db): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 23 14:48:28 crc kubenswrapper[4718]: E1123 14:48:28.331482 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-zjlxq" podUID="f93bf2f1-4f3a-44da-accb-10f3376e58db"
Nov 23 14:48:28 crc kubenswrapper[4718]: E1123 14:48:28.602106 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Nov 23 14:48:28 crc kubenswrapper[4718]: E1123 14:48:28.602286 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b2pcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-k8wqs_openshift-marketplace(80a1d25a-995b-4056-a9e4-cafa5bd6a143): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 23 14:48:28 crc kubenswrapper[4718]: E1123 14:48:28.603661 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-k8wqs" podUID="80a1d25a-995b-4056-a9e4-cafa5bd6a143"
Nov 23 14:48:34 crc kubenswrapper[4718]: E1123 14:48:34.014690 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-zjlxq" podUID="f93bf2f1-4f3a-44da-accb-10f3376e58db"
Nov 23 14:48:34 crc kubenswrapper[4718]: E1123 14:48:34.014803 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-k8wqs" podUID="80a1d25a-995b-4056-a9e4-cafa5bd6a143"
Nov 23 14:48:34 crc kubenswrapper[4718]: E1123 14:48:34.082476 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Nov 23 14:48:34 crc kubenswrapper[4718]: E1123 14:48:34.083049 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7rfwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-hqkrd_openshift-marketplace(2fc35da2-aa04-4038-aaf9-3483211b1daa): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 23 14:48:34 crc kubenswrapper[4718]: E1123 14:48:34.085371 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-hqkrd" podUID="2fc35da2-aa04-4038-aaf9-3483211b1daa"
Nov 23 14:48:36 crc kubenswrapper[4718]: E1123 14:48:36.751212 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-hqkrd" podUID="2fc35da2-aa04-4038-aaf9-3483211b1daa"
Nov 23 14:48:36 crc kubenswrapper[4718]: E1123 14:48:36.817251 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Nov 23 14:48:36 crc kubenswrapper[4718]: E1123 14:48:36.817713 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dj2zf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-4hjl5_openshift-marketplace(b70d270d-3157-42c4-ba7e-b3b2755349a7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 23 14:48:36 crc kubenswrapper[4718]: E1123 14:48:36.819395 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-4hjl5" podUID="b70d270d-3157-42c4-ba7e-b3b2755349a7"
Nov 23 14:48:36 crc kubenswrapper[4718]: E1123 14:48:36.832753 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Nov 23 14:48:36 crc kubenswrapper[4718]: E1123 14:48:36.832981 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pplg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-4v55m_openshift-marketplace(a64de119-1bd0-4a8f-87d2-64e130749dc7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 23 14:48:36 crc kubenswrapper[4718]: E1123 14:48:36.833044 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Nov 23 14:48:36 crc kubenswrapper[4718]: E1123 14:48:36.833120 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gkvgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tblh6_openshift-marketplace(3f4a73b5-96e0-451e-994f-d7351ee3fef3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 23 14:48:36 crc kubenswrapper[4718]: E1123 14:48:36.834776 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tblh6" podUID="3f4a73b5-96e0-451e-994f-d7351ee3fef3"
Nov 23 14:48:36 crc kubenswrapper[4718]: E1123 14:48:36.834835 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-4v55m" podUID="a64de119-1bd0-4a8f-87d2-64e130749dc7"
Nov 23 14:48:36 crc kubenswrapper[4718]: E1123 14:48:36.843139 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Nov 23 14:48:36 crc kubenswrapper[4718]: E1123 14:48:36.843284 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmdwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fgp86_openshift-marketplace(e55e1134-e6df-4b01-ace8-c84d74fdea73): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 23 14:48:36 crc kubenswrapper[4718]: E1123 14:48:36.844450 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-fgp86" podUID="e55e1134-e6df-4b01-ace8-c84d74fdea73"
Nov 23 14:48:37 crc kubenswrapper[4718]: I1123 14:48:37.177617 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-rzq5f_0e8598b7-32cf-41c5-9a1b-382a5c0bcc2f/cluster-samples-operator/0.log"
Nov 23 14:48:37 crc kubenswrapper[4718]: I1123 14:48:37.178283 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rzq5f" event={"ID":"0e8598b7-32cf-41c5-9a1b-382a5c0bcc2f","Type":"ContainerStarted","Data":"56b03079400de5c76f8c0c8ba4a34acff15c85cb570cd791e261c65a12b11052"}
Nov 23 14:48:37 crc kubenswrapper[4718]: I1123 14:48:37.180129 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9ba890949996352b1042c6322a2fbc1f588e72299dc40207c0b052173376677f"}
Nov 23 14:48:37 crc kubenswrapper[4718]: E1123 14:48:37.181192 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-4v55m" podUID="a64de119-1bd0-4a8f-87d2-64e130749dc7"
Nov 23 14:48:37 crc kubenswrapper[4718]: E1123 14:48:37.181847 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-4hjl5" podUID="b70d270d-3157-42c4-ba7e-b3b2755349a7"
Nov 23 14:48:37 crc kubenswrapper[4718]: E1123 14:48:37.182017 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fgp86" podUID="e55e1134-e6df-4b01-ace8-c84d74fdea73"
Nov 23 14:48:37 crc kubenswrapper[4718]: E1123 14:48:37.182399 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tblh6" podUID="3f4a73b5-96e0-451e-994f-d7351ee3fef3"
Nov 23 14:48:37 crc kubenswrapper[4718]: W1123 14:48:37.313421 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-69094dc0d3e415d9e245288817b72dae0405570a36f6bf0d161d08ddd4f2a7d4 WatchSource:0}: Error finding container 69094dc0d3e415d9e245288817b72dae0405570a36f6bf0d161d08ddd4f2a7d4: Status 404 returned error can't find the container with id 69094dc0d3e415d9e245288817b72dae0405570a36f6bf0d161d08ddd4f2a7d4
Nov 23 14:48:38 crc kubenswrapper[4718]: I1123 14:48:38.185345 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f67190841a58381167e3e2cf148b97cb53db46323287ff1943a67fc6eeeccdb8"}
Nov 23 14:48:38 crc kubenswrapper[4718]: I1123 14:48:38.187321 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"987c40f76fc22c8e2928a408b9e7020e442195aea58cb0c7c11a0c3370d00785"}
Nov 23 14:48:38 crc kubenswrapper[4718]: I1123 14:48:38.187395 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7868e2bb06c7fe09c6d949f374c44c1678907836398d9a5016ed44848bf1bdb8"}
Nov 23 14:48:38 crc kubenswrapper[4718]: I1123 14:48:38.187615 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 14:48:38 crc kubenswrapper[4718]: I1123 14:48:38.188973 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0fb96925ae1420b4163c8841a5fde675f4f1e6fccf956c856b02283971ff08a1"}
Nov 23 14:48:38 crc kubenswrapper[4718]: I1123 14:48:38.189027 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"69094dc0d3e415d9e245288817b72dae0405570a36f6bf0d161d08ddd4f2a7d4"}
Nov 23 14:48:40 crc kubenswrapper[4718]: I1123 14:48:40.203513 4718 generic.go:334] "Generic (PLEG): container finished" podID="18dd40c2-0fcf-4e42-b711-fe7cac112584" containerID="609239e3608f0c423c389f1b2aa4902c757fc11788216e7fdf9caa25de262944" exitCode=0
Nov 23 14:48:40 crc kubenswrapper[4718]: I1123 14:48:40.203583 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mr8cp" event={"ID":"18dd40c2-0fcf-4e42-b711-fe7cac112584","Type":"ContainerDied","Data":"609239e3608f0c423c389f1b2aa4902c757fc11788216e7fdf9caa25de262944"}
Nov 23 14:48:41 crc kubenswrapper[4718]: I1123 14:48:41.213622 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mr8cp" event={"ID":"18dd40c2-0fcf-4e42-b711-fe7cac112584","Type":"ContainerStarted","Data":"71fa2b7ba5635acf81bcc19db22b40a41587d44af282447c0b079ad366c8e1ac"}
Nov 23 14:48:41 crc kubenswrapper[4718]: I1123 14:48:41.236057 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mr8cp" podStartSLOduration=2.346568667 podStartE2EDuration="1m0.236031177s" podCreationTimestamp="2025-11-23 14:47:41 +0000 UTC" firstStartedPulling="2025-11-23 14:47:42.74164347 +0000 UTC m=+113.981263314" lastFinishedPulling="2025-11-23 14:48:40.63110594 +0000 UTC m=+171.870725824" observedRunningTime="2025-11-23 14:48:41.231910053 +0000 UTC m=+172.471529967" watchObservedRunningTime="2025-11-23 14:48:41.236031177 +0000 UTC m=+172.475651051"
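The pod_startup_latency_tracker entry just above reports two durations whose relationship can be checked directly from the logged monotonic offsets ("m=+..."): podStartSLOduration appears to be the end-to-end startup time minus the image-pull window. A minimal sketch, assuming exactly that formula (it is inferred, not stated in the log) and using only values copied from the entry; the second such entry below, for certified-operators-k8wqs, reproduces the same way (70.277000389 - (180.591430541 - 113.974921174) = 3.660491022):

```go
package main

import "fmt"

// Cross-check of the certified-operators-mr8cp startup entry above, under the
// assumption that
//   podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling).
// All three inputs are monotonic seconds copied verbatim from the log line.
func main() {
	const (
		e2e                 = 60.236031177  // podStartE2EDuration ("1m0.236031177s")
		firstStartedPulling = 113.981263314 // "m=+113.981263314"
		lastFinishedPulling = 171.870725824 // "m=+171.870725824"
	)
	slo := e2e - (lastFinishedPulling - firstStartedPulling)
	fmt.Printf("podStartSLOduration = %.9f\n", slo) // ≈ 2.346568667, matching the log
}
```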
probeType="Startup" pod="openshift-marketplace/certified-operators-mr8cp" podUID="18dd40c2-0fcf-4e42-b711-fe7cac112584" containerName="registry-server" probeResult="failure" output=< Nov 23 14:48:42 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Nov 23 14:48:42 crc kubenswrapper[4718]: > Nov 23 14:48:48 crc kubenswrapper[4718]: I1123 14:48:48.248857 4718 generic.go:334] "Generic (PLEG): container finished" podID="80a1d25a-995b-4056-a9e4-cafa5bd6a143" containerID="d7811023c018fd83aa196c417a308b37e0d884c3066f729c23e27c47ac50516d" exitCode=0 Nov 23 14:48:48 crc kubenswrapper[4718]: I1123 14:48:48.248953 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8wqs" event={"ID":"80a1d25a-995b-4056-a9e4-cafa5bd6a143","Type":"ContainerDied","Data":"d7811023c018fd83aa196c417a308b37e0d884c3066f729c23e27c47ac50516d"} Nov 23 14:48:50 crc kubenswrapper[4718]: I1123 14:48:50.259731 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8wqs" event={"ID":"80a1d25a-995b-4056-a9e4-cafa5bd6a143","Type":"ContainerStarted","Data":"bab1c5d41f44c3e17e279f48b204a710bd10c19fdfa0dc8707454bd4d6041f9b"} Nov 23 14:48:50 crc kubenswrapper[4718]: I1123 14:48:50.277019 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k8wqs" podStartSLOduration=3.660491022 podStartE2EDuration="1m10.277000389s" podCreationTimestamp="2025-11-23 14:47:40 +0000 UTC" firstStartedPulling="2025-11-23 14:47:42.73530133 +0000 UTC m=+113.974921174" lastFinishedPulling="2025-11-23 14:48:49.351810697 +0000 UTC m=+180.591430541" observedRunningTime="2025-11-23 14:48:50.273162592 +0000 UTC m=+181.512782436" watchObservedRunningTime="2025-11-23 14:48:50.277000389 +0000 UTC m=+181.516620233" Nov 23 14:48:50 crc kubenswrapper[4718]: I1123 14:48:50.978527 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k8wqs" Nov 23 14:48:50 crc kubenswrapper[4718]: I1123 14:48:50.978611 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k8wqs" Nov 23 14:48:51 crc kubenswrapper[4718]: I1123 14:48:51.092716 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k8wqs" Nov 23 14:48:51 crc kubenswrapper[4718]: I1123 14:48:51.423547 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mr8cp" Nov 23 14:48:51 crc kubenswrapper[4718]: I1123 14:48:51.475020 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mr8cp" Nov 23 14:48:52 crc kubenswrapper[4718]: I1123 14:48:52.270146 4718 generic.go:334] "Generic (PLEG): container finished" podID="f93bf2f1-4f3a-44da-accb-10f3376e58db" containerID="d113a54cfa74902b6ad40ef6480e62f650a933bcd043801ae82264877342423e" exitCode=0 Nov 23 14:48:52 crc kubenswrapper[4718]: I1123 14:48:52.270215 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zjlxq" event={"ID":"f93bf2f1-4f3a-44da-accb-10f3376e58db","Type":"ContainerDied","Data":"d113a54cfa74902b6ad40ef6480e62f650a933bcd043801ae82264877342423e"} Nov 23 14:48:52 crc kubenswrapper[4718]: I1123 14:48:52.272190 4718 generic.go:334] "Generic (PLEG): container finished" podID="2fc35da2-aa04-4038-aaf9-3483211b1daa" 
containerID="236d3ade230d26ce5528eb3dc8127f484944e842669ff80793f26ea7519c0ef5" exitCode=0 Nov 23 14:48:52 crc kubenswrapper[4718]: I1123 14:48:52.272426 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqkrd" event={"ID":"2fc35da2-aa04-4038-aaf9-3483211b1daa","Type":"ContainerDied","Data":"236d3ade230d26ce5528eb3dc8127f484944e842669ff80793f26ea7519c0ef5"} Nov 23 14:48:53 crc kubenswrapper[4718]: I1123 14:48:53.053044 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 14:48:53 crc kubenswrapper[4718]: I1123 14:48:53.053343 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 14:48:53 crc kubenswrapper[4718]: I1123 14:48:53.281858 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqkrd" event={"ID":"2fc35da2-aa04-4038-aaf9-3483211b1daa","Type":"ContainerStarted","Data":"fc26e817969dada390cbf52d547a9cd0dd729b33d26932848e98129da4a65c78"} Nov 23 14:48:53 crc kubenswrapper[4718]: I1123 14:48:53.285470 4718 generic.go:334] "Generic (PLEG): container finished" podID="3f4a73b5-96e0-451e-994f-d7351ee3fef3" containerID="9ac977c37b3673bb0d6d2d29f938182fdfdef34267aeba7e396c21b8f5b8acc0" exitCode=0 Nov 23 14:48:53 crc kubenswrapper[4718]: I1123 14:48:53.285540 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tblh6" event={"ID":"3f4a73b5-96e0-451e-994f-d7351ee3fef3","Type":"ContainerDied","Data":"9ac977c37b3673bb0d6d2d29f938182fdfdef34267aeba7e396c21b8f5b8acc0"} Nov 23 14:48:53 crc kubenswrapper[4718]: I1123 14:48:53.288632 4718 generic.go:334] "Generic (PLEG): container finished" podID="e55e1134-e6df-4b01-ace8-c84d74fdea73" containerID="ce4a532603123351afb2c8fb63747c7e0f27fffdbc3282c2b000fbd52856061b" exitCode=0 Nov 23 14:48:53 crc kubenswrapper[4718]: I1123 14:48:53.288696 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgp86" event={"ID":"e55e1134-e6df-4b01-ace8-c84d74fdea73","Type":"ContainerDied","Data":"ce4a532603123351afb2c8fb63747c7e0f27fffdbc3282c2b000fbd52856061b"} Nov 23 14:48:53 crc kubenswrapper[4718]: I1123 14:48:53.296630 4718 generic.go:334] "Generic (PLEG): container finished" podID="b70d270d-3157-42c4-ba7e-b3b2755349a7" containerID="04b29e82c0b36d6d70b8722aeba9e6271ee270649cffadd6c9cbbd51779626da" exitCode=0 Nov 23 14:48:53 crc kubenswrapper[4718]: I1123 14:48:53.296691 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hjl5" event={"ID":"b70d270d-3157-42c4-ba7e-b3b2755349a7","Type":"ContainerDied","Data":"04b29e82c0b36d6d70b8722aeba9e6271ee270649cffadd6c9cbbd51779626da"} Nov 23 14:48:53 crc kubenswrapper[4718]: I1123 14:48:53.299527 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zjlxq" event={"ID":"f93bf2f1-4f3a-44da-accb-10f3376e58db","Type":"ContainerStarted","Data":"9121d35dcfd8d32e0bcc8a35a6943049e30fbfe87333738475be5c331b53876d"} Nov 23 14:48:53 crc 
kubenswrapper[4718]: I1123 14:48:53.301588 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hqkrd" podStartSLOduration=3.375539661 podStartE2EDuration="1m13.301573193s" podCreationTimestamp="2025-11-23 14:47:40 +0000 UTC" firstStartedPulling="2025-11-23 14:47:42.731685928 +0000 UTC m=+113.971305772" lastFinishedPulling="2025-11-23 14:48:52.65771946 +0000 UTC m=+183.897339304" observedRunningTime="2025-11-23 14:48:53.299042719 +0000 UTC m=+184.538662563" watchObservedRunningTime="2025-11-23 14:48:53.301573193 +0000 UTC m=+184.541193037" Nov 23 14:48:53 crc kubenswrapper[4718]: I1123 14:48:53.398920 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zjlxq" podStartSLOduration=2.430775219 podStartE2EDuration="1m11.39890049s" podCreationTimestamp="2025-11-23 14:47:42 +0000 UTC" firstStartedPulling="2025-11-23 14:47:43.753636651 +0000 UTC m=+114.993256495" lastFinishedPulling="2025-11-23 14:48:52.721761922 +0000 UTC m=+183.961381766" observedRunningTime="2025-11-23 14:48:53.395247867 +0000 UTC m=+184.634867721" watchObservedRunningTime="2025-11-23 14:48:53.39890049 +0000 UTC m=+184.638520344" Nov 23 14:48:53 crc kubenswrapper[4718]: I1123 14:48:53.491646 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mr8cp"] Nov 23 14:48:53 crc kubenswrapper[4718]: I1123 14:48:53.491902 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mr8cp" podUID="18dd40c2-0fcf-4e42-b711-fe7cac112584" containerName="registry-server" containerID="cri-o://71fa2b7ba5635acf81bcc19db22b40a41587d44af282447c0b079ad366c8e1ac" gracePeriod=2 Nov 23 14:48:53 crc kubenswrapper[4718]: I1123 14:48:53.918183 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mr8cp" Nov 23 14:48:53 crc kubenswrapper[4718]: I1123 14:48:53.972513 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18dd40c2-0fcf-4e42-b711-fe7cac112584-catalog-content\") pod \"18dd40c2-0fcf-4e42-b711-fe7cac112584\" (UID: \"18dd40c2-0fcf-4e42-b711-fe7cac112584\") " Nov 23 14:48:53 crc kubenswrapper[4718]: I1123 14:48:53.972611 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18dd40c2-0fcf-4e42-b711-fe7cac112584-utilities\") pod \"18dd40c2-0fcf-4e42-b711-fe7cac112584\" (UID: \"18dd40c2-0fcf-4e42-b711-fe7cac112584\") " Nov 23 14:48:53 crc kubenswrapper[4718]: I1123 14:48:53.972647 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v7np\" (UniqueName: \"kubernetes.io/projected/18dd40c2-0fcf-4e42-b711-fe7cac112584-kube-api-access-6v7np\") pod \"18dd40c2-0fcf-4e42-b711-fe7cac112584\" (UID: \"18dd40c2-0fcf-4e42-b711-fe7cac112584\") " Nov 23 14:48:53 crc kubenswrapper[4718]: I1123 14:48:53.975217 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18dd40c2-0fcf-4e42-b711-fe7cac112584-utilities" (OuterVolumeSpecName: "utilities") pod "18dd40c2-0fcf-4e42-b711-fe7cac112584" (UID: "18dd40c2-0fcf-4e42-b711-fe7cac112584"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:48:53 crc kubenswrapper[4718]: I1123 14:48:53.980079 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18dd40c2-0fcf-4e42-b711-fe7cac112584-kube-api-access-6v7np" (OuterVolumeSpecName: "kube-api-access-6v7np") pod "18dd40c2-0fcf-4e42-b711-fe7cac112584" (UID: "18dd40c2-0fcf-4e42-b711-fe7cac112584"). InnerVolumeSpecName "kube-api-access-6v7np". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.025310 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18dd40c2-0fcf-4e42-b711-fe7cac112584-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18dd40c2-0fcf-4e42-b711-fe7cac112584" (UID: "18dd40c2-0fcf-4e42-b711-fe7cac112584"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.074929 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18dd40c2-0fcf-4e42-b711-fe7cac112584-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.074970 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18dd40c2-0fcf-4e42-b711-fe7cac112584-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.074981 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v7np\" (UniqueName: \"kubernetes.io/projected/18dd40c2-0fcf-4e42-b711-fe7cac112584-kube-api-access-6v7np\") on node \"crc\" DevicePath \"\"" Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.306338 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hjl5" event={"ID":"b70d270d-3157-42c4-ba7e-b3b2755349a7","Type":"ContainerStarted","Data":"0447dfe7f086ece00c6c1c5a25c1a9892c922d77e99fed571a0c978bcc7e3080"} Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.308900 4718 generic.go:334] "Generic (PLEG): container finished" podID="18dd40c2-0fcf-4e42-b711-fe7cac112584" containerID="71fa2b7ba5635acf81bcc19db22b40a41587d44af282447c0b079ad366c8e1ac" exitCode=0 Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.308958 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mr8cp" event={"ID":"18dd40c2-0fcf-4e42-b711-fe7cac112584","Type":"ContainerDied","Data":"71fa2b7ba5635acf81bcc19db22b40a41587d44af282447c0b079ad366c8e1ac"} Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.308976 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mr8cp" event={"ID":"18dd40c2-0fcf-4e42-b711-fe7cac112584","Type":"ContainerDied","Data":"cb5e42f681759ee3121865d83f4e31972ef0098deacc792cefefdc93650e08b5"} Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.309003 4718 scope.go:117] "RemoveContainer" containerID="71fa2b7ba5635acf81bcc19db22b40a41587d44af282447c0b079ad366c8e1ac" Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.308994 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mr8cp" Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.310967 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tblh6" event={"ID":"3f4a73b5-96e0-451e-994f-d7351ee3fef3","Type":"ContainerStarted","Data":"465d2444c5d1b82a6725e37aa2b4f81131b3623e7ed32308e680b7f44d3556af"} Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.315479 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgp86" event={"ID":"e55e1134-e6df-4b01-ace8-c84d74fdea73","Type":"ContainerStarted","Data":"a5bda06d5040a2f12185c1b583156100036ca66e6189c8d7ee49f3bab16b3eca"} Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.317577 4718 generic.go:334] "Generic (PLEG): container finished" podID="a64de119-1bd0-4a8f-87d2-64e130749dc7" containerID="adb8d1f3a18197ebdc45f28e791fbd0c6fbbb117752305ea18ef00824f362c1d" exitCode=0 Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.317603 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4v55m" event={"ID":"a64de119-1bd0-4a8f-87d2-64e130749dc7","Type":"ContainerDied","Data":"adb8d1f3a18197ebdc45f28e791fbd0c6fbbb117752305ea18ef00824f362c1d"} Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.326289 4718 scope.go:117] "RemoveContainer" containerID="609239e3608f0c423c389f1b2aa4902c757fc11788216e7fdf9caa25de262944" Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.335355 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4hjl5" podStartSLOduration=3.271936097 podStartE2EDuration="1m11.335336275s" podCreationTimestamp="2025-11-23 14:47:43 +0000 UTC" firstStartedPulling="2025-11-23 14:47:45.846494499 +0000 UTC m=+117.086114343" lastFinishedPulling="2025-11-23 14:48:53.909894667 +0000 UTC m=+185.149514521" observedRunningTime="2025-11-23 14:48:54.333886209 +0000 UTC m=+185.573506063" watchObservedRunningTime="2025-11-23 14:48:54.335336275 +0000 UTC m=+185.574956119" Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.347486 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mr8cp"] Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.350722 4718 scope.go:117] "RemoveContainer" containerID="b3ff70b6a814f189fb88c1b1d95c5e26f60f8749bf6b8f69df09af9bb1d97cd1" Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.351866 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mr8cp"] Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.369250 4718 scope.go:117] "RemoveContainer" containerID="71fa2b7ba5635acf81bcc19db22b40a41587d44af282447c0b079ad366c8e1ac" Nov 23 14:48:54 crc kubenswrapper[4718]: E1123 14:48:54.370392 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71fa2b7ba5635acf81bcc19db22b40a41587d44af282447c0b079ad366c8e1ac\": container with ID starting with 71fa2b7ba5635acf81bcc19db22b40a41587d44af282447c0b079ad366c8e1ac not found: ID does not exist" containerID="71fa2b7ba5635acf81bcc19db22b40a41587d44af282447c0b079ad366c8e1ac" Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.370441 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71fa2b7ba5635acf81bcc19db22b40a41587d44af282447c0b079ad366c8e1ac"} err="failed to get container status 
\"71fa2b7ba5635acf81bcc19db22b40a41587d44af282447c0b079ad366c8e1ac\": rpc error: code = NotFound desc = could not find container \"71fa2b7ba5635acf81bcc19db22b40a41587d44af282447c0b079ad366c8e1ac\": container with ID starting with 71fa2b7ba5635acf81bcc19db22b40a41587d44af282447c0b079ad366c8e1ac not found: ID does not exist" Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.370504 4718 scope.go:117] "RemoveContainer" containerID="609239e3608f0c423c389f1b2aa4902c757fc11788216e7fdf9caa25de262944" Nov 23 14:48:54 crc kubenswrapper[4718]: E1123 14:48:54.370883 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"609239e3608f0c423c389f1b2aa4902c757fc11788216e7fdf9caa25de262944\": container with ID starting with 609239e3608f0c423c389f1b2aa4902c757fc11788216e7fdf9caa25de262944 not found: ID does not exist" containerID="609239e3608f0c423c389f1b2aa4902c757fc11788216e7fdf9caa25de262944" Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.370916 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"609239e3608f0c423c389f1b2aa4902c757fc11788216e7fdf9caa25de262944"} err="failed to get container status \"609239e3608f0c423c389f1b2aa4902c757fc11788216e7fdf9caa25de262944\": rpc error: code = NotFound desc = could not find container \"609239e3608f0c423c389f1b2aa4902c757fc11788216e7fdf9caa25de262944\": container with ID starting with 609239e3608f0c423c389f1b2aa4902c757fc11788216e7fdf9caa25de262944 not found: ID does not exist" Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.370935 4718 scope.go:117] "RemoveContainer" containerID="b3ff70b6a814f189fb88c1b1d95c5e26f60f8749bf6b8f69df09af9bb1d97cd1" Nov 23 14:48:54 crc kubenswrapper[4718]: E1123 14:48:54.371134 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3ff70b6a814f189fb88c1b1d95c5e26f60f8749bf6b8f69df09af9bb1d97cd1\": container with ID starting with b3ff70b6a814f189fb88c1b1d95c5e26f60f8749bf6b8f69df09af9bb1d97cd1 not found: ID does not exist" containerID="b3ff70b6a814f189fb88c1b1d95c5e26f60f8749bf6b8f69df09af9bb1d97cd1" Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.371169 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3ff70b6a814f189fb88c1b1d95c5e26f60f8749bf6b8f69df09af9bb1d97cd1"} err="failed to get container status \"b3ff70b6a814f189fb88c1b1d95c5e26f60f8749bf6b8f69df09af9bb1d97cd1\": rpc error: code = NotFound desc = could not find container \"b3ff70b6a814f189fb88c1b1d95c5e26f60f8749bf6b8f69df09af9bb1d97cd1\": container with ID starting with b3ff70b6a814f189fb88c1b1d95c5e26f60f8749bf6b8f69df09af9bb1d97cd1 not found: ID does not exist" Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.393673 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tblh6" podStartSLOduration=3.244985054 podStartE2EDuration="1m14.393657073s" podCreationTimestamp="2025-11-23 14:47:40 +0000 UTC" firstStartedPulling="2025-11-23 14:47:42.707055995 +0000 UTC m=+113.946675839" lastFinishedPulling="2025-11-23 14:48:53.855728014 +0000 UTC m=+185.095347858" observedRunningTime="2025-11-23 14:48:54.3919394 +0000 UTC m=+185.631559244" watchObservedRunningTime="2025-11-23 14:48:54.393657073 +0000 UTC m=+185.633276917" Nov 23 14:48:54 crc kubenswrapper[4718]: I1123 14:48:54.450153 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="18dd40c2-0fcf-4e42-b711-fe7cac112584" path="/var/lib/kubelet/pods/18dd40c2-0fcf-4e42-b711-fe7cac112584/volumes" Nov 23 14:49:01 crc kubenswrapper[4718]: I1123 14:49:01.022007 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k8wqs" Nov 23 14:49:01 crc kubenswrapper[4718]: I1123 14:49:01.038543 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fgp86" podStartSLOduration=9.004347522 podStartE2EDuration="1m18.038524606s" podCreationTimestamp="2025-11-23 14:47:43 +0000 UTC" firstStartedPulling="2025-11-23 14:47:44.777820142 +0000 UTC m=+116.017439986" lastFinishedPulling="2025-11-23 14:48:53.811997236 +0000 UTC m=+185.051617070" observedRunningTime="2025-11-23 14:48:54.412923512 +0000 UTC m=+185.652543356" watchObservedRunningTime="2025-11-23 14:49:01.038524606 +0000 UTC m=+192.278144450" Nov 23 14:49:01 crc kubenswrapper[4718]: I1123 14:49:01.100481 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tblh6" Nov 23 14:49:01 crc kubenswrapper[4718]: I1123 14:49:01.100538 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tblh6" Nov 23 14:49:01 crc kubenswrapper[4718]: I1123 14:49:01.141588 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tblh6" Nov 23 14:49:01 crc kubenswrapper[4718]: I1123 14:49:01.206501 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hqkrd" Nov 23 14:49:01 crc kubenswrapper[4718]: I1123 14:49:01.206561 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hqkrd" Nov 23 14:49:01 crc kubenswrapper[4718]: I1123 14:49:01.263303 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hqkrd" Nov 23 14:49:01 crc kubenswrapper[4718]: I1123 14:49:01.407589 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hqkrd" Nov 23 14:49:01 crc kubenswrapper[4718]: I1123 14:49:01.610679 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tblh6" Nov 23 14:49:01 crc kubenswrapper[4718]: I1123 14:49:01.693395 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hqkrd"] Nov 23 14:49:01 crc kubenswrapper[4718]: I1123 14:49:01.725128 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hnvnt"] Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.856643 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k8wqs"] Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.857532 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k8wqs" podUID="80a1d25a-995b-4056-a9e4-cafa5bd6a143" containerName="registry-server" containerID="cri-o://bab1c5d41f44c3e17e279f48b204a710bd10c19fdfa0dc8707454bd4d6041f9b" gracePeriod=30 Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.871652 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tblh6"] 
Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.876757 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qpsmt"] Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.876958 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt" podUID="bf1008cf-6837-4089-ae38-2e44add1cfa5" containerName="marketplace-operator" containerID="cri-o://49c65404c950eabfaf34b7dd1c1f8ea41441d42b7b161abfae4f1559d4d6acab" gracePeriod=30 Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.889195 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgp86"] Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.889552 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fgp86" podUID="e55e1134-e6df-4b01-ace8-c84d74fdea73" containerName="registry-server" containerID="cri-o://a5bda06d5040a2f12185c1b583156100036ca66e6189c8d7ee49f3bab16b3eca" gracePeriod=30 Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.892775 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rdkgp"] Nov 23 14:49:02 crc kubenswrapper[4718]: E1123 14:49:02.893030 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4408fab7-4229-429f-9d07-b77d2ddb7fd1" containerName="pruner" Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.893047 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4408fab7-4229-429f-9d07-b77d2ddb7fd1" containerName="pruner" Nov 23 14:49:02 crc kubenswrapper[4718]: E1123 14:49:02.893822 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa91d87c-dcfa-4467-84bb-cdbb0599176b" containerName="pruner" Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.893836 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa91d87c-dcfa-4467-84bb-cdbb0599176b" containerName="pruner" Nov 23 14:49:02 crc kubenswrapper[4718]: E1123 14:49:02.893850 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18dd40c2-0fcf-4e42-b711-fe7cac112584" containerName="extract-content" Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.893897 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="18dd40c2-0fcf-4e42-b711-fe7cac112584" containerName="extract-content" Nov 23 14:49:02 crc kubenswrapper[4718]: E1123 14:49:02.893908 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18dd40c2-0fcf-4e42-b711-fe7cac112584" containerName="extract-utilities" Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.893915 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="18dd40c2-0fcf-4e42-b711-fe7cac112584" containerName="extract-utilities" Nov 23 14:49:02 crc kubenswrapper[4718]: E1123 14:49:02.893922 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18dd40c2-0fcf-4e42-b711-fe7cac112584" containerName="registry-server" Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.893928 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="18dd40c2-0fcf-4e42-b711-fe7cac112584" containerName="registry-server" Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.894063 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4408fab7-4229-429f-9d07-b77d2ddb7fd1" containerName="pruner" Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.894074 4718 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="aa91d87c-dcfa-4467-84bb-cdbb0599176b" containerName="pruner" Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.894083 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="18dd40c2-0fcf-4e42-b711-fe7cac112584" containerName="registry-server" Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.894537 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rdkgp" Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.902008 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zjlxq"] Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.902598 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zjlxq" podUID="f93bf2f1-4f3a-44da-accb-10f3376e58db" containerName="registry-server" containerID="cri-o://9121d35dcfd8d32e0bcc8a35a6943049e30fbfe87333738475be5c331b53876d" gracePeriod=30 Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.907782 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rdkgp"] Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.923560 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4hjl5"] Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.923887 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4hjl5" podUID="b70d270d-3157-42c4-ba7e-b3b2755349a7" containerName="registry-server" containerID="cri-o://0447dfe7f086ece00c6c1c5a25c1a9892c922d77e99fed571a0c978bcc7e3080" gracePeriod=30 Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.925741 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4v55m"] Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.988666 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/33a0a6b4-7aa0-4718-80f1-2d13fae9e761-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rdkgp\" (UID: \"33a0a6b4-7aa0-4718-80f1-2d13fae9e761\") " pod="openshift-marketplace/marketplace-operator-79b997595-rdkgp" Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.988717 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33a0a6b4-7aa0-4718-80f1-2d13fae9e761-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rdkgp\" (UID: \"33a0a6b4-7aa0-4718-80f1-2d13fae9e761\") " pod="openshift-marketplace/marketplace-operator-79b997595-rdkgp" Nov 23 14:49:02 crc kubenswrapper[4718]: I1123 14:49:02.988753 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm22b\" (UniqueName: \"kubernetes.io/projected/33a0a6b4-7aa0-4718-80f1-2d13fae9e761-kube-api-access-mm22b\") pod \"marketplace-operator-79b997595-rdkgp\" (UID: \"33a0a6b4-7aa0-4718-80f1-2d13fae9e761\") " pod="openshift-marketplace/marketplace-operator-79b997595-rdkgp" Nov 23 14:49:03 crc kubenswrapper[4718]: I1123 14:49:03.009418 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zjlxq" Nov 23 14:49:03 crc kubenswrapper[4718]: I1123 14:49:03.089987 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/33a0a6b4-7aa0-4718-80f1-2d13fae9e761-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rdkgp\" (UID: \"33a0a6b4-7aa0-4718-80f1-2d13fae9e761\") " pod="openshift-marketplace/marketplace-operator-79b997595-rdkgp" Nov 23 14:49:03 crc kubenswrapper[4718]: I1123 14:49:03.090051 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33a0a6b4-7aa0-4718-80f1-2d13fae9e761-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rdkgp\" (UID: \"33a0a6b4-7aa0-4718-80f1-2d13fae9e761\") " pod="openshift-marketplace/marketplace-operator-79b997595-rdkgp" Nov 23 14:49:03 crc kubenswrapper[4718]: I1123 14:49:03.090086 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm22b\" (UniqueName: \"kubernetes.io/projected/33a0a6b4-7aa0-4718-80f1-2d13fae9e761-kube-api-access-mm22b\") pod \"marketplace-operator-79b997595-rdkgp\" (UID: \"33a0a6b4-7aa0-4718-80f1-2d13fae9e761\") " pod="openshift-marketplace/marketplace-operator-79b997595-rdkgp" Nov 23 14:49:03 crc kubenswrapper[4718]: I1123 14:49:03.092388 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33a0a6b4-7aa0-4718-80f1-2d13fae9e761-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rdkgp\" (UID: \"33a0a6b4-7aa0-4718-80f1-2d13fae9e761\") " pod="openshift-marketplace/marketplace-operator-79b997595-rdkgp" Nov 23 14:49:03 crc kubenswrapper[4718]: I1123 14:49:03.095627 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/33a0a6b4-7aa0-4718-80f1-2d13fae9e761-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rdkgp\" (UID: \"33a0a6b4-7aa0-4718-80f1-2d13fae9e761\") " pod="openshift-marketplace/marketplace-operator-79b997595-rdkgp" Nov 23 14:49:03 crc kubenswrapper[4718]: I1123 14:49:03.106803 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm22b\" (UniqueName: \"kubernetes.io/projected/33a0a6b4-7aa0-4718-80f1-2d13fae9e761-kube-api-access-mm22b\") pod \"marketplace-operator-79b997595-rdkgp\" (UID: \"33a0a6b4-7aa0-4718-80f1-2d13fae9e761\") " pod="openshift-marketplace/marketplace-operator-79b997595-rdkgp" Nov 23 14:49:03 crc kubenswrapper[4718]: I1123 14:49:03.219054 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rdkgp" Nov 23 14:49:03 crc kubenswrapper[4718]: I1123 14:49:03.369803 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4v55m" event={"ID":"a64de119-1bd0-4a8f-87d2-64e130749dc7","Type":"ContainerStarted","Data":"139218d20643f37a19afd5fa87deb7d39d20de9facdc9d0d925e81f40f782f15"} Nov 23 14:49:03 crc kubenswrapper[4718]: I1123 14:49:03.370019 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tblh6" podUID="3f4a73b5-96e0-451e-994f-d7351ee3fef3" containerName="registry-server" containerID="cri-o://465d2444c5d1b82a6725e37aa2b4f81131b3623e7ed32308e680b7f44d3556af" gracePeriod=30 Nov 23 14:49:03 crc kubenswrapper[4718]: I1123 14:49:03.370163 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hqkrd" podUID="2fc35da2-aa04-4038-aaf9-3483211b1daa" containerName="registry-server" containerID="cri-o://fc26e817969dada390cbf52d547a9cd0dd729b33d26932848e98129da4a65c78" gracePeriod=2 Nov 23 14:49:03 crc kubenswrapper[4718]: I1123 14:49:03.384744 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fgp86" Nov 23 14:49:03 crc kubenswrapper[4718]: I1123 14:49:03.639898 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rdkgp"] Nov 23 14:49:03 crc kubenswrapper[4718]: W1123 14:49:03.646851 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33a0a6b4_7aa0_4718_80f1_2d13fae9e761.slice/crio-036a72d0379a32b1925721d530efae293b633760c6a5b1ae1c31e24bfe217156 WatchSource:0}: Error finding container 036a72d0379a32b1925721d530efae293b633760c6a5b1ae1c31e24bfe217156: Status 404 returned error can't find the container with id 036a72d0379a32b1925721d530efae293b633760c6a5b1ae1c31e24bfe217156 Nov 23 14:49:04 crc kubenswrapper[4718]: I1123 14:49:04.286056 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4hjl5" Nov 23 14:49:04 crc kubenswrapper[4718]: I1123 14:49:04.377056 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rdkgp" event={"ID":"33a0a6b4-7aa0-4718-80f1-2d13fae9e761","Type":"ContainerStarted","Data":"036a72d0379a32b1925721d530efae293b633760c6a5b1ae1c31e24bfe217156"} Nov 23 14:49:04 crc kubenswrapper[4718]: I1123 14:49:04.377191 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4v55m" podUID="a64de119-1bd0-4a8f-87d2-64e130749dc7" containerName="registry-server" containerID="cri-o://139218d20643f37a19afd5fa87deb7d39d20de9facdc9d0d925e81f40f782f15" gracePeriod=30 Nov 23 14:49:04 crc kubenswrapper[4718]: I1123 14:49:04.392164 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4v55m" Nov 23 14:49:04 crc kubenswrapper[4718]: I1123 14:49:04.397120 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4v55m" podStartSLOduration=4.386735466 podStartE2EDuration="1m20.397089268s" podCreationTimestamp="2025-11-23 14:47:44 +0000 UTC" firstStartedPulling="2025-11-23 14:47:45.84063745 +0000 UTC m=+117.080257294" lastFinishedPulling="2025-11-23 
14:49:01.850991252 +0000 UTC m=+193.090611096" observedRunningTime="2025-11-23 14:49:04.39369263 +0000 UTC m=+195.633312494" watchObservedRunningTime="2025-11-23 14:49:04.397089268 +0000 UTC m=+195.636709152" Nov 23 14:49:04 crc kubenswrapper[4718]: I1123 14:49:04.750434 4718 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qpsmt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Nov 23 14:49:04 crc kubenswrapper[4718]: I1123 14:49:04.750532 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt" podUID="bf1008cf-6837-4089-ae38-2e44add1cfa5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Nov 23 14:49:05 crc kubenswrapper[4718]: I1123 14:49:05.384500 4718 generic.go:334] "Generic (PLEG): container finished" podID="bf1008cf-6837-4089-ae38-2e44add1cfa5" containerID="49c65404c950eabfaf34b7dd1c1f8ea41441d42b7b161abfae4f1559d4d6acab" exitCode=0 Nov 23 14:49:05 crc kubenswrapper[4718]: I1123 14:49:05.384613 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt" event={"ID":"bf1008cf-6837-4089-ae38-2e44add1cfa5","Type":"ContainerDied","Data":"49c65404c950eabfaf34b7dd1c1f8ea41441d42b7b161abfae4f1559d4d6acab"} Nov 23 14:49:05 crc kubenswrapper[4718]: I1123 14:49:05.388637 4718 generic.go:334] "Generic (PLEG): container finished" podID="e55e1134-e6df-4b01-ace8-c84d74fdea73" containerID="a5bda06d5040a2f12185c1b583156100036ca66e6189c8d7ee49f3bab16b3eca" exitCode=0 Nov 23 14:49:05 crc kubenswrapper[4718]: I1123 14:49:05.388713 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgp86" event={"ID":"e55e1134-e6df-4b01-ace8-c84d74fdea73","Type":"ContainerDied","Data":"a5bda06d5040a2f12185c1b583156100036ca66e6189c8d7ee49f3bab16b3eca"} Nov 23 14:49:05 crc kubenswrapper[4718]: I1123 14:49:05.390430 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rdkgp" event={"ID":"33a0a6b4-7aa0-4718-80f1-2d13fae9e761","Type":"ContainerStarted","Data":"c500c2ec4da3977808287b55dac46999150bce3a35a015c1431f0f20f34a6772"} Nov 23 14:49:05 crc kubenswrapper[4718]: I1123 14:49:05.392879 4718 generic.go:334] "Generic (PLEG): container finished" podID="80a1d25a-995b-4056-a9e4-cafa5bd6a143" containerID="bab1c5d41f44c3e17e279f48b204a710bd10c19fdfa0dc8707454bd4d6041f9b" exitCode=0 Nov 23 14:49:05 crc kubenswrapper[4718]: I1123 14:49:05.392919 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8wqs" event={"ID":"80a1d25a-995b-4056-a9e4-cafa5bd6a143","Type":"ContainerDied","Data":"bab1c5d41f44c3e17e279f48b204a710bd10c19fdfa0dc8707454bd4d6041f9b"} Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.403227 4718 generic.go:334] "Generic (PLEG): container finished" podID="2fc35da2-aa04-4038-aaf9-3483211b1daa" containerID="fc26e817969dada390cbf52d547a9cd0dd729b33d26932848e98129da4a65c78" exitCode=0 Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.403323 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqkrd" 
event={"ID":"2fc35da2-aa04-4038-aaf9-3483211b1daa","Type":"ContainerDied","Data":"fc26e817969dada390cbf52d547a9cd0dd729b33d26932848e98129da4a65c78"} Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.406095 4718 generic.go:334] "Generic (PLEG): container finished" podID="3f4a73b5-96e0-451e-994f-d7351ee3fef3" containerID="465d2444c5d1b82a6725e37aa2b4f81131b3623e7ed32308e680b7f44d3556af" exitCode=0 Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.406193 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tblh6" event={"ID":"3f4a73b5-96e0-451e-994f-d7351ee3fef3","Type":"ContainerDied","Data":"465d2444c5d1b82a6725e37aa2b4f81131b3623e7ed32308e680b7f44d3556af"} Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.407736 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4v55m_a64de119-1bd0-4a8f-87d2-64e130749dc7/registry-server/0.log" Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.408698 4718 generic.go:334] "Generic (PLEG): container finished" podID="a64de119-1bd0-4a8f-87d2-64e130749dc7" containerID="139218d20643f37a19afd5fa87deb7d39d20de9facdc9d0d925e81f40f782f15" exitCode=1 Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.408930 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4v55m" event={"ID":"a64de119-1bd0-4a8f-87d2-64e130749dc7","Type":"ContainerDied","Data":"139218d20643f37a19afd5fa87deb7d39d20de9facdc9d0d925e81f40f782f15"} Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.412039 4718 generic.go:334] "Generic (PLEG): container finished" podID="b70d270d-3157-42c4-ba7e-b3b2755349a7" containerID="0447dfe7f086ece00c6c1c5a25c1a9892c922d77e99fed571a0c978bcc7e3080" exitCode=0 Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.412115 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hjl5" event={"ID":"b70d270d-3157-42c4-ba7e-b3b2755349a7","Type":"ContainerDied","Data":"0447dfe7f086ece00c6c1c5a25c1a9892c922d77e99fed571a0c978bcc7e3080"} Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.415030 4718 generic.go:334] "Generic (PLEG): container finished" podID="f93bf2f1-4f3a-44da-accb-10f3376e58db" containerID="9121d35dcfd8d32e0bcc8a35a6943049e30fbfe87333738475be5c331b53876d" exitCode=0 Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.415598 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zjlxq" event={"ID":"f93bf2f1-4f3a-44da-accb-10f3376e58db","Type":"ContainerDied","Data":"9121d35dcfd8d32e0bcc8a35a6943049e30fbfe87333738475be5c331b53876d"} Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.415800 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rdkgp" Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.420263 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rdkgp" Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.447055 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-rdkgp" podStartSLOduration=4.447030724 podStartE2EDuration="4.447030724s" podCreationTimestamp="2025-11-23 14:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 
14:49:06.43605142 +0000 UTC m=+197.675671264" watchObservedRunningTime="2025-11-23 14:49:06.447030724 +0000 UTC m=+197.686650578" Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.483842 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt" Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.548151 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6swh4\" (UniqueName: \"kubernetes.io/projected/bf1008cf-6837-4089-ae38-2e44add1cfa5-kube-api-access-6swh4\") pod \"bf1008cf-6837-4089-ae38-2e44add1cfa5\" (UID: \"bf1008cf-6837-4089-ae38-2e44add1cfa5\") " Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.548275 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bf1008cf-6837-4089-ae38-2e44add1cfa5-marketplace-operator-metrics\") pod \"bf1008cf-6837-4089-ae38-2e44add1cfa5\" (UID: \"bf1008cf-6837-4089-ae38-2e44add1cfa5\") " Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.548320 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf1008cf-6837-4089-ae38-2e44add1cfa5-marketplace-trusted-ca\") pod \"bf1008cf-6837-4089-ae38-2e44add1cfa5\" (UID: \"bf1008cf-6837-4089-ae38-2e44add1cfa5\") " Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.553079 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf1008cf-6837-4089-ae38-2e44add1cfa5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "bf1008cf-6837-4089-ae38-2e44add1cfa5" (UID: "bf1008cf-6837-4089-ae38-2e44add1cfa5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.554086 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf1008cf-6837-4089-ae38-2e44add1cfa5-kube-api-access-6swh4" (OuterVolumeSpecName: "kube-api-access-6swh4") pod "bf1008cf-6837-4089-ae38-2e44add1cfa5" (UID: "bf1008cf-6837-4089-ae38-2e44add1cfa5"). InnerVolumeSpecName "kube-api-access-6swh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.554663 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf1008cf-6837-4089-ae38-2e44add1cfa5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "bf1008cf-6837-4089-ae38-2e44add1cfa5" (UID: "bf1008cf-6837-4089-ae38-2e44add1cfa5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.649947 4718 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bf1008cf-6837-4089-ae38-2e44add1cfa5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.649985 4718 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf1008cf-6837-4089-ae38-2e44add1cfa5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.649994 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6swh4\" (UniqueName: \"kubernetes.io/projected/bf1008cf-6837-4089-ae38-2e44add1cfa5-kube-api-access-6swh4\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.742489 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgp86" Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.798242 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k8wqs" Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.806299 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zjlxq" Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.852378 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rhsc\" (UniqueName: \"kubernetes.io/projected/f93bf2f1-4f3a-44da-accb-10f3376e58db-kube-api-access-2rhsc\") pod \"f93bf2f1-4f3a-44da-accb-10f3376e58db\" (UID: \"f93bf2f1-4f3a-44da-accb-10f3376e58db\") " Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.852466 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e55e1134-e6df-4b01-ace8-c84d74fdea73-utilities\") pod \"e55e1134-e6df-4b01-ace8-c84d74fdea73\" (UID: \"e55e1134-e6df-4b01-ace8-c84d74fdea73\") " Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.852511 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e55e1134-e6df-4b01-ace8-c84d74fdea73-catalog-content\") pod \"e55e1134-e6df-4b01-ace8-c84d74fdea73\" (UID: \"e55e1134-e6df-4b01-ace8-c84d74fdea73\") " Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.853238 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e55e1134-e6df-4b01-ace8-c84d74fdea73-utilities" (OuterVolumeSpecName: "utilities") pod "e55e1134-e6df-4b01-ace8-c84d74fdea73" (UID: "e55e1134-e6df-4b01-ace8-c84d74fdea73"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.853367 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80a1d25a-995b-4056-a9e4-cafa5bd6a143-utilities" (OuterVolumeSpecName: "utilities") pod "80a1d25a-995b-4056-a9e4-cafa5bd6a143" (UID: "80a1d25a-995b-4056-a9e4-cafa5bd6a143"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.855407 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f93bf2f1-4f3a-44da-accb-10f3376e58db-kube-api-access-2rhsc" (OuterVolumeSpecName: "kube-api-access-2rhsc") pod "f93bf2f1-4f3a-44da-accb-10f3376e58db" (UID: "f93bf2f1-4f3a-44da-accb-10f3376e58db"). InnerVolumeSpecName "kube-api-access-2rhsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.852560 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80a1d25a-995b-4056-a9e4-cafa5bd6a143-utilities\") pod \"80a1d25a-995b-4056-a9e4-cafa5bd6a143\" (UID: \"80a1d25a-995b-4056-a9e4-cafa5bd6a143\") " Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.855764 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmdwt\" (UniqueName: \"kubernetes.io/projected/e55e1134-e6df-4b01-ace8-c84d74fdea73-kube-api-access-tmdwt\") pod \"e55e1134-e6df-4b01-ace8-c84d74fdea73\" (UID: \"e55e1134-e6df-4b01-ace8-c84d74fdea73\") " Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.855823 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f93bf2f1-4f3a-44da-accb-10f3376e58db-catalog-content\") pod \"f93bf2f1-4f3a-44da-accb-10f3376e58db\" (UID: \"f93bf2f1-4f3a-44da-accb-10f3376e58db\") " Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.855858 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2pcj\" (UniqueName: \"kubernetes.io/projected/80a1d25a-995b-4056-a9e4-cafa5bd6a143-kube-api-access-b2pcj\") pod \"80a1d25a-995b-4056-a9e4-cafa5bd6a143\" (UID: \"80a1d25a-995b-4056-a9e4-cafa5bd6a143\") " Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.855909 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f93bf2f1-4f3a-44da-accb-10f3376e58db-utilities\") pod \"f93bf2f1-4f3a-44da-accb-10f3376e58db\" (UID: \"f93bf2f1-4f3a-44da-accb-10f3376e58db\") " Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.855951 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80a1d25a-995b-4056-a9e4-cafa5bd6a143-catalog-content\") pod \"80a1d25a-995b-4056-a9e4-cafa5bd6a143\" (UID: \"80a1d25a-995b-4056-a9e4-cafa5bd6a143\") " Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.856235 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rhsc\" (UniqueName: \"kubernetes.io/projected/f93bf2f1-4f3a-44da-accb-10f3376e58db-kube-api-access-2rhsc\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.856260 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e55e1134-e6df-4b01-ace8-c84d74fdea73-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.856272 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80a1d25a-995b-4056-a9e4-cafa5bd6a143-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.857147 4718 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f93bf2f1-4f3a-44da-accb-10f3376e58db-utilities" (OuterVolumeSpecName: "utilities") pod "f93bf2f1-4f3a-44da-accb-10f3376e58db" (UID: "f93bf2f1-4f3a-44da-accb-10f3376e58db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.865136 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e55e1134-e6df-4b01-ace8-c84d74fdea73-kube-api-access-tmdwt" (OuterVolumeSpecName: "kube-api-access-tmdwt") pod "e55e1134-e6df-4b01-ace8-c84d74fdea73" (UID: "e55e1134-e6df-4b01-ace8-c84d74fdea73"). InnerVolumeSpecName "kube-api-access-tmdwt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.865216 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80a1d25a-995b-4056-a9e4-cafa5bd6a143-kube-api-access-b2pcj" (OuterVolumeSpecName: "kube-api-access-b2pcj") pod "80a1d25a-995b-4056-a9e4-cafa5bd6a143" (UID: "80a1d25a-995b-4056-a9e4-cafa5bd6a143"). InnerVolumeSpecName "kube-api-access-b2pcj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.870779 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e55e1134-e6df-4b01-ace8-c84d74fdea73-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e55e1134-e6df-4b01-ace8-c84d74fdea73" (UID: "e55e1134-e6df-4b01-ace8-c84d74fdea73"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.898956 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4hjl5"
Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.901453 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80a1d25a-995b-4056-a9e4-cafa5bd6a143-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80a1d25a-995b-4056-a9e4-cafa5bd6a143" (UID: "80a1d25a-995b-4056-a9e4-cafa5bd6a143"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.957419 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj2zf\" (UniqueName: \"kubernetes.io/projected/b70d270d-3157-42c4-ba7e-b3b2755349a7-kube-api-access-dj2zf\") pod \"b70d270d-3157-42c4-ba7e-b3b2755349a7\" (UID: \"b70d270d-3157-42c4-ba7e-b3b2755349a7\") "
Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.957777 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b70d270d-3157-42c4-ba7e-b3b2755349a7-utilities\") pod \"b70d270d-3157-42c4-ba7e-b3b2755349a7\" (UID: \"b70d270d-3157-42c4-ba7e-b3b2755349a7\") "
Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.957899 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b70d270d-3157-42c4-ba7e-b3b2755349a7-catalog-content\") pod \"b70d270d-3157-42c4-ba7e-b3b2755349a7\" (UID: \"b70d270d-3157-42c4-ba7e-b3b2755349a7\") "
Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.958282 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f93bf2f1-4f3a-44da-accb-10f3376e58db-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.958398 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80a1d25a-995b-4056-a9e4-cafa5bd6a143-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.958494 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e55e1134-e6df-4b01-ace8-c84d74fdea73-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.958590 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmdwt\" (UniqueName: \"kubernetes.io/projected/e55e1134-e6df-4b01-ace8-c84d74fdea73-kube-api-access-tmdwt\") on node \"crc\" DevicePath \"\""
Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.958672 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2pcj\" (UniqueName: \"kubernetes.io/projected/80a1d25a-995b-4056-a9e4-cafa5bd6a143-kube-api-access-b2pcj\") on node \"crc\" DevicePath \"\""
Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.958737 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b70d270d-3157-42c4-ba7e-b3b2755349a7-utilities" (OuterVolumeSpecName: "utilities") pod "b70d270d-3157-42c4-ba7e-b3b2755349a7" (UID: "b70d270d-3157-42c4-ba7e-b3b2755349a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 14:49:06 crc kubenswrapper[4718]: I1123 14:49:06.959849 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b70d270d-3157-42c4-ba7e-b3b2755349a7-kube-api-access-dj2zf" (OuterVolumeSpecName: "kube-api-access-dj2zf") pod "b70d270d-3157-42c4-ba7e-b3b2755349a7" (UID: "b70d270d-3157-42c4-ba7e-b3b2755349a7"). InnerVolumeSpecName "kube-api-access-dj2zf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.059736 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b70d270d-3157-42c4-ba7e-b3b2755349a7-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.059764 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj2zf\" (UniqueName: \"kubernetes.io/projected/b70d270d-3157-42c4-ba7e-b3b2755349a7-kube-api-access-dj2zf\") on node \"crc\" DevicePath \"\""
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.385103 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f93bf2f1-4f3a-44da-accb-10f3376e58db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f93bf2f1-4f3a-44da-accb-10f3376e58db" (UID: "f93bf2f1-4f3a-44da-accb-10f3376e58db"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.425062 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zjlxq" event={"ID":"f93bf2f1-4f3a-44da-accb-10f3376e58db","Type":"ContainerDied","Data":"bb5f9e27a2f049e19ebeb731b445f0f0425dce65771bc3d6688ae163ffae0c28"}
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.425132 4718 scope.go:117] "RemoveContainer" containerID="9121d35dcfd8d32e0bcc8a35a6943049e30fbfe87333738475be5c331b53876d"
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.425167 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zjlxq"
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.435924 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8wqs" event={"ID":"80a1d25a-995b-4056-a9e4-cafa5bd6a143","Type":"ContainerDied","Data":"0c74690e2bb2ad410b133df2c18ac7b753c3dcbe9e37a9873ae387cdb9708e95"}
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.436070 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k8wqs"
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.437604 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt" event={"ID":"bf1008cf-6837-4089-ae38-2e44add1cfa5","Type":"ContainerDied","Data":"2cbecd873d72ac4725ceacfeff6020fa58f8449248e3b68623a293cd4db49a45"}
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.437685 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qpsmt"
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.467117 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f93bf2f1-4f3a-44da-accb-10f3376e58db-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.469877 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgp86" event={"ID":"e55e1134-e6df-4b01-ace8-c84d74fdea73","Type":"ContainerDied","Data":"9cdb76756181e9c61517176c7a813b8663f7d948a8c849c3a85ebda0efae94a9"}
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.470008 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgp86"
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.484788 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4hjl5"
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.484619 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hjl5" event={"ID":"b70d270d-3157-42c4-ba7e-b3b2755349a7","Type":"ContainerDied","Data":"533eebe4c902b36eed3275b7685facc8bbf6a526954032ee1e038acca8bb77c3"}
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.490285 4718 scope.go:117] "RemoveContainer" containerID="d113a54cfa74902b6ad40ef6480e62f650a933bcd043801ae82264877342423e"
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.501720 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k8wqs"]
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.508700 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k8wqs"]
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.516343 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zjlxq"]
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.521108 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zjlxq"]
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.539885 4718 scope.go:117] "RemoveContainer" containerID="394cb180f9150b7ea7e00af2ff924c92d52a12d79db788a98447c0d7617ec208"
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.543634 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qpsmt"]
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.552475 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qpsmt"]
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.563764 4718 scope.go:117] "RemoveContainer" containerID="bab1c5d41f44c3e17e279f48b204a710bd10c19fdfa0dc8707454bd4d6041f9b"
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.564510 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgp86"]
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.568253 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgp86"]
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.585180 4718 scope.go:117] "RemoveContainer" containerID="d7811023c018fd83aa196c417a308b37e0d884c3066f729c23e27c47ac50516d"
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.608528 4718 scope.go:117] "RemoveContainer" containerID="3be0ed3d8b82ebd170b96dabf3d35f011f3a8c3275e53b1900539de78d736e67"
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.634750 4718 scope.go:117] "RemoveContainer" containerID="49c65404c950eabfaf34b7dd1c1f8ea41441d42b7b161abfae4f1559d4d6acab"
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.652727 4718 scope.go:117] "RemoveContainer" containerID="a5bda06d5040a2f12185c1b583156100036ca66e6189c8d7ee49f3bab16b3eca"
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.667511 4718 scope.go:117] "RemoveContainer" containerID="ce4a532603123351afb2c8fb63747c7e0f27fffdbc3282c2b000fbd52856061b"
Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.692487 4718 scope.go:117] "RemoveContainer" containerID="31cd3fd4f2314b4d299f7aa219dae06b68916c17bb2689431b999cbfb13dea72"
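Every teardown above follows the same reconciler sequence per volume: "operationExecutor.UnmountVolume started" (reconciler_common.go:159), then "UnmountVolume.TearDown succeeded" (operation_generator.go:803), then "Volume detached" (reconciler_common.go:293). A minimal sketch, assuming the excerpt has been saved to a plain-text file (the kubelet.log path is hypothetical), that reports which pod-UID/volume pairs reached the final detached state; volumes that never get there are the ones stuck mid-teardown:

```go
// teardown_report.go - a sketch (not kubelet code) that scans a saved journal
// excerpt like the one above and lists the volumes that reached the final
// "Volume detached" state, grouped by pod UID.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches e.g.: Volume detached for volume \"utilities\"
	// (UniqueName: \"kubernetes.io/empty-dir/<pod-uid>-utilities\")
	re := regexp.MustCompile(`Volume detached for volume \\"([^"\\]+)\\" \(UniqueName: \\"kubernetes\.io/([^/]+)/([0-9a-f-]{36})-`)

	f, err := os.Open("kubelet.log") // hypothetical path to the saved excerpt
	if err != nil {
		panic(err)
	}
	defer f.Close()

	detached := map[string][]string{} // pod UID -> detached volume names
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			detached[m[3]] = append(detached[m[3]], m[1]+" ("+m[2]+")")
		}
	}
	for uid, vols := range detached {
		fmt.Println(uid, "->", vols)
	}
}
```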
containerID="31cd3fd4f2314b4d299f7aa219dae06b68916c17bb2689431b999cbfb13dea72" Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.705363 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tblh6" Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.708280 4718 scope.go:117] "RemoveContainer" containerID="0447dfe7f086ece00c6c1c5a25c1a9892c922d77e99fed571a0c978bcc7e3080" Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.728092 4718 scope.go:117] "RemoveContainer" containerID="04b29e82c0b36d6d70b8722aeba9e6271ee270649cffadd6c9cbbd51779626da" Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.749596 4718 scope.go:117] "RemoveContainer" containerID="00d12b2acbdf6266632dd87ae0b0986708e31c2239be92bde847c69eb99f7dbd" Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.769043 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4a73b5-96e0-451e-994f-d7351ee3fef3-catalog-content\") pod \"3f4a73b5-96e0-451e-994f-d7351ee3fef3\" (UID: \"3f4a73b5-96e0-451e-994f-d7351ee3fef3\") " Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.769097 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkvgs\" (UniqueName: \"kubernetes.io/projected/3f4a73b5-96e0-451e-994f-d7351ee3fef3-kube-api-access-gkvgs\") pod \"3f4a73b5-96e0-451e-994f-d7351ee3fef3\" (UID: \"3f4a73b5-96e0-451e-994f-d7351ee3fef3\") " Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.769186 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4a73b5-96e0-451e-994f-d7351ee3fef3-utilities\") pod \"3f4a73b5-96e0-451e-994f-d7351ee3fef3\" (UID: \"3f4a73b5-96e0-451e-994f-d7351ee3fef3\") " Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.772268 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f4a73b5-96e0-451e-994f-d7351ee3fef3-utilities" (OuterVolumeSpecName: "utilities") pod "3f4a73b5-96e0-451e-994f-d7351ee3fef3" (UID: "3f4a73b5-96e0-451e-994f-d7351ee3fef3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.773170 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f4a73b5-96e0-451e-994f-d7351ee3fef3-kube-api-access-gkvgs" (OuterVolumeSpecName: "kube-api-access-gkvgs") pod "3f4a73b5-96e0-451e-994f-d7351ee3fef3" (UID: "3f4a73b5-96e0-451e-994f-d7351ee3fef3"). InnerVolumeSpecName "kube-api-access-gkvgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.774201 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hqkrd" Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.838011 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4v55m_a64de119-1bd0-4a8f-87d2-64e130749dc7/registry-server/0.log" Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.838735 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4v55m" Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.870300 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a64de119-1bd0-4a8f-87d2-64e130749dc7-utilities\") pod \"a64de119-1bd0-4a8f-87d2-64e130749dc7\" (UID: \"a64de119-1bd0-4a8f-87d2-64e130749dc7\") " Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.870353 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rfwj\" (UniqueName: \"kubernetes.io/projected/2fc35da2-aa04-4038-aaf9-3483211b1daa-kube-api-access-7rfwj\") pod \"2fc35da2-aa04-4038-aaf9-3483211b1daa\" (UID: \"2fc35da2-aa04-4038-aaf9-3483211b1daa\") " Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.870378 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fc35da2-aa04-4038-aaf9-3483211b1daa-catalog-content\") pod \"2fc35da2-aa04-4038-aaf9-3483211b1daa\" (UID: \"2fc35da2-aa04-4038-aaf9-3483211b1daa\") " Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.870408 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a64de119-1bd0-4a8f-87d2-64e130749dc7-catalog-content\") pod \"a64de119-1bd0-4a8f-87d2-64e130749dc7\" (UID: \"a64de119-1bd0-4a8f-87d2-64e130749dc7\") " Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.870475 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pplg\" (UniqueName: \"kubernetes.io/projected/a64de119-1bd0-4a8f-87d2-64e130749dc7-kube-api-access-2pplg\") pod \"a64de119-1bd0-4a8f-87d2-64e130749dc7\" (UID: \"a64de119-1bd0-4a8f-87d2-64e130749dc7\") " Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.870502 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fc35da2-aa04-4038-aaf9-3483211b1daa-utilities\") pod \"2fc35da2-aa04-4038-aaf9-3483211b1daa\" (UID: \"2fc35da2-aa04-4038-aaf9-3483211b1daa\") " Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.870695 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkvgs\" (UniqueName: \"kubernetes.io/projected/3f4a73b5-96e0-451e-994f-d7351ee3fef3-kube-api-access-gkvgs\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.870711 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4a73b5-96e0-451e-994f-d7351ee3fef3-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.871625 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fc35da2-aa04-4038-aaf9-3483211b1daa-utilities" (OuterVolumeSpecName: "utilities") pod "2fc35da2-aa04-4038-aaf9-3483211b1daa" (UID: "2fc35da2-aa04-4038-aaf9-3483211b1daa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.872588 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a64de119-1bd0-4a8f-87d2-64e130749dc7-utilities" (OuterVolumeSpecName: "utilities") pod "a64de119-1bd0-4a8f-87d2-64e130749dc7" (UID: "a64de119-1bd0-4a8f-87d2-64e130749dc7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.877010 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a64de119-1bd0-4a8f-87d2-64e130749dc7-kube-api-access-2pplg" (OuterVolumeSpecName: "kube-api-access-2pplg") pod "a64de119-1bd0-4a8f-87d2-64e130749dc7" (UID: "a64de119-1bd0-4a8f-87d2-64e130749dc7"). InnerVolumeSpecName "kube-api-access-2pplg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.888414 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fc35da2-aa04-4038-aaf9-3483211b1daa-kube-api-access-7rfwj" (OuterVolumeSpecName: "kube-api-access-7rfwj") pod "2fc35da2-aa04-4038-aaf9-3483211b1daa" (UID: "2fc35da2-aa04-4038-aaf9-3483211b1daa"). InnerVolumeSpecName "kube-api-access-7rfwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.971629 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a64de119-1bd0-4a8f-87d2-64e130749dc7-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.971657 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rfwj\" (UniqueName: \"kubernetes.io/projected/2fc35da2-aa04-4038-aaf9-3483211b1daa-kube-api-access-7rfwj\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.971670 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pplg\" (UniqueName: \"kubernetes.io/projected/a64de119-1bd0-4a8f-87d2-64e130749dc7-kube-api-access-2pplg\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.971678 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fc35da2-aa04-4038-aaf9-3483211b1daa-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:07 crc kubenswrapper[4718]: I1123 14:49:07.986390 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a64de119-1bd0-4a8f-87d2-64e130749dc7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a64de119-1bd0-4a8f-87d2-64e130749dc7" (UID: "a64de119-1bd0-4a8f-87d2-64e130749dc7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.073196 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a64de119-1bd0-4a8f-87d2-64e130749dc7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.283089 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f4a73b5-96e0-451e-994f-d7351ee3fef3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f4a73b5-96e0-451e-994f-d7351ee3fef3" (UID: "3f4a73b5-96e0-451e-994f-d7351ee3fef3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.377053 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4a73b5-96e0-451e-994f-d7351ee3fef3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.451729 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80a1d25a-995b-4056-a9e4-cafa5bd6a143" path="/var/lib/kubelet/pods/80a1d25a-995b-4056-a9e4-cafa5bd6a143/volumes" Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.453001 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf1008cf-6837-4089-ae38-2e44add1cfa5" path="/var/lib/kubelet/pods/bf1008cf-6837-4089-ae38-2e44add1cfa5/volumes" Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.453963 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e55e1134-e6df-4b01-ace8-c84d74fdea73" path="/var/lib/kubelet/pods/e55e1134-e6df-4b01-ace8-c84d74fdea73/volumes" Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.455930 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f93bf2f1-4f3a-44da-accb-10f3376e58db" path="/var/lib/kubelet/pods/f93bf2f1-4f3a-44da-accb-10f3376e58db/volumes" Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.492582 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tblh6" Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.492577 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tblh6" event={"ID":"3f4a73b5-96e0-451e-994f-d7351ee3fef3","Type":"ContainerDied","Data":"7f3d8816e36738d564ea92467109b8efb073ea1420d17fd790a04b8b6a1767dc"} Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.492717 4718 scope.go:117] "RemoveContainer" containerID="465d2444c5d1b82a6725e37aa2b4f81131b3623e7ed32308e680b7f44d3556af" Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.504337 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4v55m_a64de119-1bd0-4a8f-87d2-64e130749dc7/registry-server/0.log" Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.506484 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4v55m" Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.506518 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4v55m" event={"ID":"a64de119-1bd0-4a8f-87d2-64e130749dc7","Type":"ContainerDied","Data":"ee347229747b0ded0b118bb3bb641c73e718fe9a86fd09f7d5b93fcdff093581"} Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.508249 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fc35da2-aa04-4038-aaf9-3483211b1daa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fc35da2-aa04-4038-aaf9-3483211b1daa" (UID: "2fc35da2-aa04-4038-aaf9-3483211b1daa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.558702 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tblh6"] Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.560031 4718 scope.go:117] "RemoveContainer" containerID="9ac977c37b3673bb0d6d2d29f938182fdfdef34267aeba7e396c21b8f5b8acc0" Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.561962 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tblh6"] Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.563813 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqkrd" event={"ID":"2fc35da2-aa04-4038-aaf9-3483211b1daa","Type":"ContainerDied","Data":"a2a9a6e30675f32a5d2c0a64fa4b62a43b8ef464002c5f9a774ff7383f3d2a1e"} Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.563827 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hqkrd" Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.576669 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4v55m"] Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.579657 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fc35da2-aa04-4038-aaf9-3483211b1daa-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.580620 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4v55m"] Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.586464 4718 scope.go:117] "RemoveContainer" containerID="01bf59c13259fc34277cec3dde7436d4b0220cad482a57ff844ea52e87496aed" Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.608030 4718 scope.go:117] "RemoveContainer" containerID="139218d20643f37a19afd5fa87deb7d39d20de9facdc9d0d925e81f40f782f15" Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.622847 4718 scope.go:117] "RemoveContainer" containerID="adb8d1f3a18197ebdc45f28e791fbd0c6fbbb117752305ea18ef00824f362c1d" Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.634944 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hqkrd"] Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.645883 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hqkrd"] Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.647215 4718 scope.go:117] "RemoveContainer" containerID="d09d70726bb68f7ce4dfad8c82aea5e51db729b1a8ce4f3da661d5550dbf48ad" Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.678685 4718 scope.go:117] "RemoveContainer" containerID="fc26e817969dada390cbf52d547a9cd0dd729b33d26932848e98129da4a65c78" Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.697204 4718 scope.go:117] "RemoveContainer" containerID="236d3ade230d26ce5528eb3dc8127f484944e842669ff80793f26ea7519c0ef5" Nov 23 14:49:08 crc kubenswrapper[4718]: I1123 14:49:08.713641 4718 scope.go:117] "RemoveContainer" containerID="f330c3fe177f5f8f0f6ad191f729545cee50c5047b622ef6308c3a99984aa6a2" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.307370 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pn9qf"] Nov 23 14:49:09 crc kubenswrapper[4718]: E1123 14:49:09.307702 4718 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f4a73b5-96e0-451e-994f-d7351ee3fef3" containerName="registry-server" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.307723 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4a73b5-96e0-451e-994f-d7351ee3fef3" containerName="registry-server" Nov 23 14:49:09 crc kubenswrapper[4718]: E1123 14:49:09.307742 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc35da2-aa04-4038-aaf9-3483211b1daa" containerName="registry-server" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.307754 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc35da2-aa04-4038-aaf9-3483211b1daa" containerName="registry-server" Nov 23 14:49:09 crc kubenswrapper[4718]: E1123 14:49:09.307775 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e55e1134-e6df-4b01-ace8-c84d74fdea73" containerName="extract-utilities" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.307789 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e55e1134-e6df-4b01-ace8-c84d74fdea73" containerName="extract-utilities" Nov 23 14:49:09 crc kubenswrapper[4718]: E1123 14:49:09.307804 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b70d270d-3157-42c4-ba7e-b3b2755349a7" containerName="extract-content" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.307816 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70d270d-3157-42c4-ba7e-b3b2755349a7" containerName="extract-content" Nov 23 14:49:09 crc kubenswrapper[4718]: E1123 14:49:09.307831 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64de119-1bd0-4a8f-87d2-64e130749dc7" containerName="extract-utilities" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.307842 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64de119-1bd0-4a8f-87d2-64e130749dc7" containerName="extract-utilities" Nov 23 14:49:09 crc kubenswrapper[4718]: E1123 14:49:09.307860 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b70d270d-3157-42c4-ba7e-b3b2755349a7" containerName="registry-server" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.307872 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70d270d-3157-42c4-ba7e-b3b2755349a7" containerName="registry-server" Nov 23 14:49:09 crc kubenswrapper[4718]: E1123 14:49:09.307884 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64de119-1bd0-4a8f-87d2-64e130749dc7" containerName="registry-server" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.307898 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64de119-1bd0-4a8f-87d2-64e130749dc7" containerName="registry-server" Nov 23 14:49:09 crc kubenswrapper[4718]: E1123 14:49:09.307914 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc35da2-aa04-4038-aaf9-3483211b1daa" containerName="extract-utilities" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.307927 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc35da2-aa04-4038-aaf9-3483211b1daa" containerName="extract-utilities" Nov 23 14:49:09 crc kubenswrapper[4718]: E1123 14:49:09.307941 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64de119-1bd0-4a8f-87d2-64e130749dc7" containerName="extract-content" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.307953 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64de119-1bd0-4a8f-87d2-64e130749dc7" containerName="extract-content" Nov 23 14:49:09 crc 
kubenswrapper[4718]: E1123 14:49:09.307971 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1008cf-6837-4089-ae38-2e44add1cfa5" containerName="marketplace-operator" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.307984 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf1008cf-6837-4089-ae38-2e44add1cfa5" containerName="marketplace-operator" Nov 23 14:49:09 crc kubenswrapper[4718]: E1123 14:49:09.308085 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e55e1134-e6df-4b01-ace8-c84d74fdea73" containerName="extract-content" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.308098 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e55e1134-e6df-4b01-ace8-c84d74fdea73" containerName="extract-content" Nov 23 14:49:09 crc kubenswrapper[4718]: E1123 14:49:09.308156 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a1d25a-995b-4056-a9e4-cafa5bd6a143" containerName="registry-server" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.308171 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a1d25a-995b-4056-a9e4-cafa5bd6a143" containerName="registry-server" Nov 23 14:49:09 crc kubenswrapper[4718]: E1123 14:49:09.308190 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f4a73b5-96e0-451e-994f-d7351ee3fef3" containerName="extract-utilities" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.308201 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4a73b5-96e0-451e-994f-d7351ee3fef3" containerName="extract-utilities" Nov 23 14:49:09 crc kubenswrapper[4718]: E1123 14:49:09.308257 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc35da2-aa04-4038-aaf9-3483211b1daa" containerName="extract-content" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.308274 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc35da2-aa04-4038-aaf9-3483211b1daa" containerName="extract-content" Nov 23 14:49:09 crc kubenswrapper[4718]: E1123 14:49:09.308294 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f93bf2f1-4f3a-44da-accb-10f3376e58db" containerName="registry-server" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.308345 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f93bf2f1-4f3a-44da-accb-10f3376e58db" containerName="registry-server" Nov 23 14:49:09 crc kubenswrapper[4718]: E1123 14:49:09.308362 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a1d25a-995b-4056-a9e4-cafa5bd6a143" containerName="extract-utilities" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.308374 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a1d25a-995b-4056-a9e4-cafa5bd6a143" containerName="extract-utilities" Nov 23 14:49:09 crc kubenswrapper[4718]: E1123 14:49:09.308388 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a1d25a-995b-4056-a9e4-cafa5bd6a143" containerName="extract-content" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.308433 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a1d25a-995b-4056-a9e4-cafa5bd6a143" containerName="extract-content" Nov 23 14:49:09 crc kubenswrapper[4718]: E1123 14:49:09.308499 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b70d270d-3157-42c4-ba7e-b3b2755349a7" containerName="extract-utilities" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.308512 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70d270d-3157-42c4-ba7e-b3b2755349a7" 
containerName="extract-utilities" Nov 23 14:49:09 crc kubenswrapper[4718]: E1123 14:49:09.308527 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f93bf2f1-4f3a-44da-accb-10f3376e58db" containerName="extract-content" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.308575 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f93bf2f1-4f3a-44da-accb-10f3376e58db" containerName="extract-content" Nov 23 14:49:09 crc kubenswrapper[4718]: E1123 14:49:09.308596 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e55e1134-e6df-4b01-ace8-c84d74fdea73" containerName="registry-server" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.308609 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e55e1134-e6df-4b01-ace8-c84d74fdea73" containerName="registry-server" Nov 23 14:49:09 crc kubenswrapper[4718]: E1123 14:49:09.308626 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f4a73b5-96e0-451e-994f-d7351ee3fef3" containerName="extract-content" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.308676 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4a73b5-96e0-451e-994f-d7351ee3fef3" containerName="extract-content" Nov 23 14:49:09 crc kubenswrapper[4718]: E1123 14:49:09.308695 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f93bf2f1-4f3a-44da-accb-10f3376e58db" containerName="extract-utilities" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.308707 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f93bf2f1-4f3a-44da-accb-10f3376e58db" containerName="extract-utilities" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.308979 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a64de119-1bd0-4a8f-87d2-64e130749dc7" containerName="registry-server" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.308998 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="e55e1134-e6df-4b01-ace8-c84d74fdea73" containerName="registry-server" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.309035 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="80a1d25a-995b-4056-a9e4-cafa5bd6a143" containerName="registry-server" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.309052 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f4a73b5-96e0-451e-994f-d7351ee3fef3" containerName="registry-server" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.309064 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b70d270d-3157-42c4-ba7e-b3b2755349a7" containerName="registry-server" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.309081 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fc35da2-aa04-4038-aaf9-3483211b1daa" containerName="registry-server" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.309098 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f93bf2f1-4f3a-44da-accb-10f3376e58db" containerName="registry-server" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.309119 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf1008cf-6837-4089-ae38-2e44add1cfa5" containerName="marketplace-operator" Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.310289 4718 util.go:30] "No sandbox for pod can be found. 
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.315284 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.335555 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pn9qf"]
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.389888 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13ce74fe-0bed-4d92-9587-e933c3a3b03c-utilities\") pod \"redhat-marketplace-pn9qf\" (UID: \"13ce74fe-0bed-4d92-9587-e933c3a3b03c\") " pod="openshift-marketplace/redhat-marketplace-pn9qf"
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.390037 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fss9\" (UniqueName: \"kubernetes.io/projected/13ce74fe-0bed-4d92-9587-e933c3a3b03c-kube-api-access-6fss9\") pod \"redhat-marketplace-pn9qf\" (UID: \"13ce74fe-0bed-4d92-9587-e933c3a3b03c\") " pod="openshift-marketplace/redhat-marketplace-pn9qf"
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.390123 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13ce74fe-0bed-4d92-9587-e933c3a3b03c-catalog-content\") pod \"redhat-marketplace-pn9qf\" (UID: \"13ce74fe-0bed-4d92-9587-e933c3a3b03c\") " pod="openshift-marketplace/redhat-marketplace-pn9qf"
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.492599 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13ce74fe-0bed-4d92-9587-e933c3a3b03c-utilities\") pod \"redhat-marketplace-pn9qf\" (UID: \"13ce74fe-0bed-4d92-9587-e933c3a3b03c\") " pod="openshift-marketplace/redhat-marketplace-pn9qf"
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.492752 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fss9\" (UniqueName: \"kubernetes.io/projected/13ce74fe-0bed-4d92-9587-e933c3a3b03c-kube-api-access-6fss9\") pod \"redhat-marketplace-pn9qf\" (UID: \"13ce74fe-0bed-4d92-9587-e933c3a3b03c\") " pod="openshift-marketplace/redhat-marketplace-pn9qf"
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.492855 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13ce74fe-0bed-4d92-9587-e933c3a3b03c-catalog-content\") pod \"redhat-marketplace-pn9qf\" (UID: \"13ce74fe-0bed-4d92-9587-e933c3a3b03c\") " pod="openshift-marketplace/redhat-marketplace-pn9qf"
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.494396 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13ce74fe-0bed-4d92-9587-e933c3a3b03c-utilities\") pod \"redhat-marketplace-pn9qf\" (UID: \"13ce74fe-0bed-4d92-9587-e933c3a3b03c\") " pod="openshift-marketplace/redhat-marketplace-pn9qf"
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.495345 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13ce74fe-0bed-4d92-9587-e933c3a3b03c-catalog-content\") pod \"redhat-marketplace-pn9qf\" (UID: \"13ce74fe-0bed-4d92-9587-e933c3a3b03c\") " pod="openshift-marketplace/redhat-marketplace-pn9qf"
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.507576 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nb5q4"]
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.510232 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nb5q4"
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.515756 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.537194 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nb5q4"]
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.540592 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fss9\" (UniqueName: \"kubernetes.io/projected/13ce74fe-0bed-4d92-9587-e933c3a3b03c-kube-api-access-6fss9\") pod \"redhat-marketplace-pn9qf\" (UID: \"13ce74fe-0bed-4d92-9587-e933c3a3b03c\") " pod="openshift-marketplace/redhat-marketplace-pn9qf"
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.593968 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12f7b2fd-ebf7-4c33-8da3-bfd473790e77-utilities\") pod \"certified-operators-nb5q4\" (UID: \"12f7b2fd-ebf7-4c33-8da3-bfd473790e77\") " pod="openshift-marketplace/certified-operators-nb5q4"
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.594031 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgjfr\" (UniqueName: \"kubernetes.io/projected/12f7b2fd-ebf7-4c33-8da3-bfd473790e77-kube-api-access-rgjfr\") pod \"certified-operators-nb5q4\" (UID: \"12f7b2fd-ebf7-4c33-8da3-bfd473790e77\") " pod="openshift-marketplace/certified-operators-nb5q4"
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.594080 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12f7b2fd-ebf7-4c33-8da3-bfd473790e77-catalog-content\") pod \"certified-operators-nb5q4\" (UID: \"12f7b2fd-ebf7-4c33-8da3-bfd473790e77\") " pod="openshift-marketplace/certified-operators-nb5q4"
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.644636 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pn9qf"
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.697230 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12f7b2fd-ebf7-4c33-8da3-bfd473790e77-utilities\") pod \"certified-operators-nb5q4\" (UID: \"12f7b2fd-ebf7-4c33-8da3-bfd473790e77\") " pod="openshift-marketplace/certified-operators-nb5q4"
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.697311 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgjfr\" (UniqueName: \"kubernetes.io/projected/12f7b2fd-ebf7-4c33-8da3-bfd473790e77-kube-api-access-rgjfr\") pod \"certified-operators-nb5q4\" (UID: \"12f7b2fd-ebf7-4c33-8da3-bfd473790e77\") " pod="openshift-marketplace/certified-operators-nb5q4"
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.697376 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12f7b2fd-ebf7-4c33-8da3-bfd473790e77-catalog-content\") pod \"certified-operators-nb5q4\" (UID: \"12f7b2fd-ebf7-4c33-8da3-bfd473790e77\") " pod="openshift-marketplace/certified-operators-nb5q4"
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.698662 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12f7b2fd-ebf7-4c33-8da3-bfd473790e77-catalog-content\") pod \"certified-operators-nb5q4\" (UID: \"12f7b2fd-ebf7-4c33-8da3-bfd473790e77\") " pod="openshift-marketplace/certified-operators-nb5q4"
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.698960 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12f7b2fd-ebf7-4c33-8da3-bfd473790e77-utilities\") pod \"certified-operators-nb5q4\" (UID: \"12f7b2fd-ebf7-4c33-8da3-bfd473790e77\") " pod="openshift-marketplace/certified-operators-nb5q4"
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.724860 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgjfr\" (UniqueName: \"kubernetes.io/projected/12f7b2fd-ebf7-4c33-8da3-bfd473790e77-kube-api-access-rgjfr\") pod \"certified-operators-nb5q4\" (UID: \"12f7b2fd-ebf7-4c33-8da3-bfd473790e77\") " pod="openshift-marketplace/certified-operators-nb5q4"
Nov 23 14:49:09 crc kubenswrapper[4718]: I1123 14:49:09.838160 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nb5q4"
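Each replacement catalog pod mounts the same trio of volumes seen in the VerifyControllerAttachedVolume/MountVolume entries: two emptyDirs declared in the pod spec (utilities and catalog-content) plus a projected kube-api-access-* volume, which is injected by the API server's service-account admission rather than declared by the pod's author. A sketch of the emptyDir stanza in API terms, assuming the k8s.io/api and sigs.k8s.io/yaml modules are on the module path:

```go
// catalog_volumes.go - the volume stanza behind the mount entries above,
// expressed with the upstream API types. The kube-api-access-* projected
// volume is deliberately omitted: it is added server-side at admission time.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	vols := []corev1.Volume{
		{Name: "utilities", VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "catalog-content", VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{}}},
	}
	out, err := yaml.Marshal(vols)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // prints the two emptyDir volume definitions as YAML
}
```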
Nov 23 14:49:10 crc kubenswrapper[4718]: I1123 14:49:10.151038 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pn9qf"]
Nov 23 14:49:10 crc kubenswrapper[4718]: I1123 14:49:10.287914 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nb5q4"]
Nov 23 14:49:10 crc kubenswrapper[4718]: W1123 14:49:10.296796 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12f7b2fd_ebf7_4c33_8da3_bfd473790e77.slice/crio-f1edf55690e73339e6e8343c931510fdd507e2fd6f354f90fd6d8bf68a61d992 WatchSource:0}: Error finding container f1edf55690e73339e6e8343c931510fdd507e2fd6f354f90fd6d8bf68a61d992: Status 404 returned error can't find the container with id f1edf55690e73339e6e8343c931510fdd507e2fd6f354f90fd6d8bf68a61d992
Nov 23 14:49:10 crc kubenswrapper[4718]: I1123 14:49:10.457064 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fc35da2-aa04-4038-aaf9-3483211b1daa" path="/var/lib/kubelet/pods/2fc35da2-aa04-4038-aaf9-3483211b1daa/volumes"
Nov 23 14:49:10 crc kubenswrapper[4718]: I1123 14:49:10.458490 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f4a73b5-96e0-451e-994f-d7351ee3fef3" path="/var/lib/kubelet/pods/3f4a73b5-96e0-451e-994f-d7351ee3fef3/volumes"
Nov 23 14:49:10 crc kubenswrapper[4718]: I1123 14:49:10.459924 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a64de119-1bd0-4a8f-87d2-64e130749dc7" path="/var/lib/kubelet/pods/a64de119-1bd0-4a8f-87d2-64e130749dc7/volumes"
Nov 23 14:49:10 crc kubenswrapper[4718]: I1123 14:49:10.584098 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nb5q4" event={"ID":"12f7b2fd-ebf7-4c33-8da3-bfd473790e77","Type":"ContainerStarted","Data":"f1edf55690e73339e6e8343c931510fdd507e2fd6f354f90fd6d8bf68a61d992"}
Nov 23 14:49:10 crc kubenswrapper[4718]: I1123 14:49:10.586510 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pn9qf" event={"ID":"13ce74fe-0bed-4d92-9587-e933c3a3b03c","Type":"ContainerStarted","Data":"9311c2c40af5f43cfa0df09500813cca6b2693ffa49c3029234036bc762d471c"}
Nov 23 14:49:10 crc kubenswrapper[4718]: I1123 14:49:10.586572 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pn9qf" event={"ID":"13ce74fe-0bed-4d92-9587-e933c3a3b03c","Type":"ContainerStarted","Data":"a3de2940d7b1994aef633880a5868be8a226aab89b6d5448ae67314954891fa5"}
Nov 23 14:49:11 crc kubenswrapper[4718]: I1123 14:49:11.327182 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b70d270d-3157-42c4-ba7e-b3b2755349a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b70d270d-3157-42c4-ba7e-b3b2755349a7" (UID: "b70d270d-3157-42c4-ba7e-b3b2755349a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 14:49:11 crc kubenswrapper[4718]: I1123 14:49:11.419154 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b70d270d-3157-42c4-ba7e-b3b2755349a7-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 14:49:11 crc kubenswrapper[4718]: I1123 14:49:11.426831 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4hjl5"]
Nov 23 14:49:11 crc kubenswrapper[4718]: I1123 14:49:11.433382 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4hjl5"]
Nov 23 14:49:11 crc kubenswrapper[4718]: I1123 14:49:11.600697 4718 generic.go:334] "Generic (PLEG): container finished" podID="12f7b2fd-ebf7-4c33-8da3-bfd473790e77" containerID="c0b58f3d4e340afdc9a05c1bf47622d131756138a0f8d57d7ef8b1122472fb0f" exitCode=0
Nov 23 14:49:11 crc kubenswrapper[4718]: I1123 14:49:11.600846 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nb5q4" event={"ID":"12f7b2fd-ebf7-4c33-8da3-bfd473790e77","Type":"ContainerDied","Data":"c0b58f3d4e340afdc9a05c1bf47622d131756138a0f8d57d7ef8b1122472fb0f"}
Nov 23 14:49:11 crc kubenswrapper[4718]: I1123 14:49:11.603805 4718 generic.go:334] "Generic (PLEG): container finished" podID="13ce74fe-0bed-4d92-9587-e933c3a3b03c" containerID="9311c2c40af5f43cfa0df09500813cca6b2693ffa49c3029234036bc762d471c" exitCode=0
Nov 23 14:49:11 crc kubenswrapper[4718]: I1123 14:49:11.603844 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pn9qf" event={"ID":"13ce74fe-0bed-4d92-9587-e933c3a3b03c","Type":"ContainerDied","Data":"9311c2c40af5f43cfa0df09500813cca6b2693ffa49c3029234036bc762d471c"}
Nov 23 14:49:11 crc kubenswrapper[4718]: I1123 14:49:11.906631 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7wmpb"]
Nov 23 14:49:11 crc kubenswrapper[4718]: I1123 14:49:11.919308 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wmpb"
Nov 23 14:49:11 crc kubenswrapper[4718]: I1123 14:49:11.922746 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Nov 23 14:49:11 crc kubenswrapper[4718]: I1123 14:49:11.922828 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dffe07d-8aa8-46b3-a5a5-28d8152d6df3-utilities\") pod \"community-operators-7wmpb\" (UID: \"7dffe07d-8aa8-46b3-a5a5-28d8152d6df3\") " pod="openshift-marketplace/community-operators-7wmpb"
Nov 23 14:49:11 crc kubenswrapper[4718]: I1123 14:49:11.922874 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dffe07d-8aa8-46b3-a5a5-28d8152d6df3-catalog-content\") pod \"community-operators-7wmpb\" (UID: \"7dffe07d-8aa8-46b3-a5a5-28d8152d6df3\") " pod="openshift-marketplace/community-operators-7wmpb"
Nov 23 14:49:11 crc kubenswrapper[4718]: I1123 14:49:11.922969 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5bjl\" (UniqueName: \"kubernetes.io/projected/7dffe07d-8aa8-46b3-a5a5-28d8152d6df3-kube-api-access-l5bjl\") pod \"community-operators-7wmpb\" (UID: \"7dffe07d-8aa8-46b3-a5a5-28d8152d6df3\") " pod="openshift-marketplace/community-operators-7wmpb"
Nov 23 14:49:11 crc kubenswrapper[4718]: I1123 14:49:11.948594 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wmpb"]
Nov 23 14:49:12 crc kubenswrapper[4718]: I1123 14:49:12.023688 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dffe07d-8aa8-46b3-a5a5-28d8152d6df3-catalog-content\") pod \"community-operators-7wmpb\" (UID: \"7dffe07d-8aa8-46b3-a5a5-28d8152d6df3\") " pod="openshift-marketplace/community-operators-7wmpb"
Nov 23 14:49:12 crc kubenswrapper[4718]: I1123 14:49:12.023825 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5bjl\" (UniqueName: \"kubernetes.io/projected/7dffe07d-8aa8-46b3-a5a5-28d8152d6df3-kube-api-access-l5bjl\") pod \"community-operators-7wmpb\" (UID: \"7dffe07d-8aa8-46b3-a5a5-28d8152d6df3\") " pod="openshift-marketplace/community-operators-7wmpb"
Nov 23 14:49:12 crc kubenswrapper[4718]: I1123 14:49:12.023852 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dffe07d-8aa8-46b3-a5a5-28d8152d6df3-utilities\") pod \"community-operators-7wmpb\" (UID: \"7dffe07d-8aa8-46b3-a5a5-28d8152d6df3\") " pod="openshift-marketplace/community-operators-7wmpb"
Nov 23 14:49:12 crc kubenswrapper[4718]: I1123 14:49:12.024313 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dffe07d-8aa8-46b3-a5a5-28d8152d6df3-catalog-content\") pod \"community-operators-7wmpb\" (UID: \"7dffe07d-8aa8-46b3-a5a5-28d8152d6df3\") " pod="openshift-marketplace/community-operators-7wmpb"
Nov 23 14:49:12 crc kubenswrapper[4718]: I1123 14:49:12.024383 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dffe07d-8aa8-46b3-a5a5-28d8152d6df3-utilities\") pod \"community-operators-7wmpb\" (UID: \"7dffe07d-8aa8-46b3-a5a5-28d8152d6df3\") " pod="openshift-marketplace/community-operators-7wmpb"
Nov 23 14:49:12 crc kubenswrapper[4718]: I1123 14:49:12.043677 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5bjl\" (UniqueName: \"kubernetes.io/projected/7dffe07d-8aa8-46b3-a5a5-28d8152d6df3-kube-api-access-l5bjl\") pod \"community-operators-7wmpb\" (UID: \"7dffe07d-8aa8-46b3-a5a5-28d8152d6df3\") " pod="openshift-marketplace/community-operators-7wmpb"
Nov 23 14:49:12 crc kubenswrapper[4718]: I1123 14:49:12.252300 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wmpb"
Nov 23 14:49:12 crc kubenswrapper[4718]: I1123 14:49:12.450223 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b70d270d-3157-42c4-ba7e-b3b2755349a7" path="/var/lib/kubelet/pods/b70d270d-3157-42c4-ba7e-b3b2755349a7/volumes"
Nov 23 14:49:12 crc kubenswrapper[4718]: I1123 14:49:12.572985 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 14:49:12 crc kubenswrapper[4718]: I1123 14:49:12.624998 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nb5q4" event={"ID":"12f7b2fd-ebf7-4c33-8da3-bfd473790e77","Type":"ContainerStarted","Data":"62529ec7a921c106be4d622d073ec47be8602b3027cc427ff771f4d8b1533785"}
Nov 23 14:49:12 crc kubenswrapper[4718]: I1123 14:49:12.627937 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pn9qf" event={"ID":"13ce74fe-0bed-4d92-9587-e933c3a3b03c","Type":"ContainerStarted","Data":"9ec2ebb46956047738e8c4770acd6695febead503521b286c97aecb4eedd545a"}
Nov 23 14:49:12 crc kubenswrapper[4718]: I1123 14:49:12.715375 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wmpb"]
Nov 23 14:49:13 crc kubenswrapper[4718]: I1123 14:49:13.639320 4718 generic.go:334] "Generic (PLEG): container finished" podID="12f7b2fd-ebf7-4c33-8da3-bfd473790e77" containerID="62529ec7a921c106be4d622d073ec47be8602b3027cc427ff771f4d8b1533785" exitCode=0
Nov 23 14:49:13 crc kubenswrapper[4718]: I1123 14:49:13.639412 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nb5q4" event={"ID":"12f7b2fd-ebf7-4c33-8da3-bfd473790e77","Type":"ContainerDied","Data":"62529ec7a921c106be4d622d073ec47be8602b3027cc427ff771f4d8b1533785"}
Nov 23 14:49:13 crc kubenswrapper[4718]: I1123 14:49:13.646060 4718 generic.go:334] "Generic (PLEG): container finished" podID="13ce74fe-0bed-4d92-9587-e933c3a3b03c" containerID="9ec2ebb46956047738e8c4770acd6695febead503521b286c97aecb4eedd545a" exitCode=0
Nov 23 14:49:13 crc kubenswrapper[4718]: I1123 14:49:13.646534 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pn9qf" event={"ID":"13ce74fe-0bed-4d92-9587-e933c3a3b03c","Type":"ContainerDied","Data":"9ec2ebb46956047738e8c4770acd6695febead503521b286c97aecb4eedd545a"}
Nov 23 14:49:13 crc kubenswrapper[4718]: I1123 14:49:13.649691 4718 generic.go:334] "Generic (PLEG): container finished" podID="7dffe07d-8aa8-46b3-a5a5-28d8152d6df3" containerID="59a52f30f3be50cfffe033e0c79c3fb7840ea02e03253f7b2954e2f7940a0c61" exitCode=0
Nov 23 14:49:13 crc kubenswrapper[4718]: I1123 14:49:13.649716 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wmpb" event={"ID":"7dffe07d-8aa8-46b3-a5a5-28d8152d6df3","Type":"ContainerDied","Data":"59a52f30f3be50cfffe033e0c79c3fb7840ea02e03253f7b2954e2f7940a0c61"}
event={"ID":"7dffe07d-8aa8-46b3-a5a5-28d8152d6df3","Type":"ContainerDied","Data":"59a52f30f3be50cfffe033e0c79c3fb7840ea02e03253f7b2954e2f7940a0c61"} Nov 23 14:49:13 crc kubenswrapper[4718]: I1123 14:49:13.649735 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wmpb" event={"ID":"7dffe07d-8aa8-46b3-a5a5-28d8152d6df3","Type":"ContainerStarted","Data":"4eafa805f9523425ab95dfb03e237ee3c52bb0f3db4cbcb658f1bdfc75a47ff4"} Nov 23 14:49:14 crc kubenswrapper[4718]: I1123 14:49:14.657055 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wmpb" event={"ID":"7dffe07d-8aa8-46b3-a5a5-28d8152d6df3","Type":"ContainerStarted","Data":"f57327976765b3e892e4aa60cfe5392be1277ff46d1a4904c37c15574dbf3b6d"} Nov 23 14:49:14 crc kubenswrapper[4718]: I1123 14:49:14.661017 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nb5q4" event={"ID":"12f7b2fd-ebf7-4c33-8da3-bfd473790e77","Type":"ContainerStarted","Data":"567e9806840ecafcfd7d0eed4a82a733dab3e238d11cba7edb391876f9916475"} Nov 23 14:49:14 crc kubenswrapper[4718]: I1123 14:49:14.663268 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pn9qf" event={"ID":"13ce74fe-0bed-4d92-9587-e933c3a3b03c","Type":"ContainerStarted","Data":"8201a7577689930986ce6d5c63b9494b6c344cc9a035b8400402c5f489ef8404"} Nov 23 14:49:14 crc kubenswrapper[4718]: I1123 14:49:14.697819 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pn9qf" podStartSLOduration=3.200246483 podStartE2EDuration="5.697800899s" podCreationTimestamp="2025-11-23 14:49:09 +0000 UTC" firstStartedPulling="2025-11-23 14:49:11.606008956 +0000 UTC m=+202.845628840" lastFinishedPulling="2025-11-23 14:49:14.103563412 +0000 UTC m=+205.343183256" observedRunningTime="2025-11-23 14:49:14.695097272 +0000 UTC m=+205.934717116" watchObservedRunningTime="2025-11-23 14:49:14.697800899 +0000 UTC m=+205.937420743" Nov 23 14:49:14 crc kubenswrapper[4718]: I1123 14:49:14.719954 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nb5q4" podStartSLOduration=3.244128598 podStartE2EDuration="5.719932142s" podCreationTimestamp="2025-11-23 14:49:09 +0000 UTC" firstStartedPulling="2025-11-23 14:49:11.602868116 +0000 UTC m=+202.842488000" lastFinishedPulling="2025-11-23 14:49:14.07867165 +0000 UTC m=+205.318291544" observedRunningTime="2025-11-23 14:49:14.715214517 +0000 UTC m=+205.954834361" watchObservedRunningTime="2025-11-23 14:49:14.719932142 +0000 UTC m=+205.959552016" Nov 23 14:49:15 crc kubenswrapper[4718]: I1123 14:49:15.670941 4718 generic.go:334] "Generic (PLEG): container finished" podID="7dffe07d-8aa8-46b3-a5a5-28d8152d6df3" containerID="f57327976765b3e892e4aa60cfe5392be1277ff46d1a4904c37c15574dbf3b6d" exitCode=0 Nov 23 14:49:15 crc kubenswrapper[4718]: I1123 14:49:15.671020 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wmpb" event={"ID":"7dffe07d-8aa8-46b3-a5a5-28d8152d6df3","Type":"ContainerDied","Data":"f57327976765b3e892e4aa60cfe5392be1277ff46d1a4904c37c15574dbf3b6d"} Nov 23 14:49:15 crc kubenswrapper[4718]: I1123 14:49:15.696224 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wjtk6"] Nov 23 14:49:15 crc kubenswrapper[4718]: I1123 14:49:15.697376 4718 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wjtk6" Nov 23 14:49:15 crc kubenswrapper[4718]: I1123 14:49:15.699550 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 23 14:49:15 crc kubenswrapper[4718]: I1123 14:49:15.716346 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wjtk6"] Nov 23 14:49:15 crc kubenswrapper[4718]: I1123 14:49:15.875948 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81743302-97b1-40c2-953f-070a5b775a74-catalog-content\") pod \"redhat-operators-wjtk6\" (UID: \"81743302-97b1-40c2-953f-070a5b775a74\") " pod="openshift-marketplace/redhat-operators-wjtk6" Nov 23 14:49:15 crc kubenswrapper[4718]: I1123 14:49:15.876018 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81743302-97b1-40c2-953f-070a5b775a74-utilities\") pod \"redhat-operators-wjtk6\" (UID: \"81743302-97b1-40c2-953f-070a5b775a74\") " pod="openshift-marketplace/redhat-operators-wjtk6" Nov 23 14:49:15 crc kubenswrapper[4718]: I1123 14:49:15.876172 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crs4w\" (UniqueName: \"kubernetes.io/projected/81743302-97b1-40c2-953f-070a5b775a74-kube-api-access-crs4w\") pod \"redhat-operators-wjtk6\" (UID: \"81743302-97b1-40c2-953f-070a5b775a74\") " pod="openshift-marketplace/redhat-operators-wjtk6" Nov 23 14:49:15 crc kubenswrapper[4718]: I1123 14:49:15.976827 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81743302-97b1-40c2-953f-070a5b775a74-catalog-content\") pod \"redhat-operators-wjtk6\" (UID: \"81743302-97b1-40c2-953f-070a5b775a74\") " pod="openshift-marketplace/redhat-operators-wjtk6" Nov 23 14:49:15 crc kubenswrapper[4718]: I1123 14:49:15.976983 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81743302-97b1-40c2-953f-070a5b775a74-utilities\") pod \"redhat-operators-wjtk6\" (UID: \"81743302-97b1-40c2-953f-070a5b775a74\") " pod="openshift-marketplace/redhat-operators-wjtk6" Nov 23 14:49:15 crc kubenswrapper[4718]: I1123 14:49:15.977103 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crs4w\" (UniqueName: \"kubernetes.io/projected/81743302-97b1-40c2-953f-070a5b775a74-kube-api-access-crs4w\") pod \"redhat-operators-wjtk6\" (UID: \"81743302-97b1-40c2-953f-070a5b775a74\") " pod="openshift-marketplace/redhat-operators-wjtk6" Nov 23 14:49:15 crc kubenswrapper[4718]: I1123 14:49:15.977306 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81743302-97b1-40c2-953f-070a5b775a74-catalog-content\") pod \"redhat-operators-wjtk6\" (UID: \"81743302-97b1-40c2-953f-070a5b775a74\") " pod="openshift-marketplace/redhat-operators-wjtk6" Nov 23 14:49:15 crc kubenswrapper[4718]: I1123 14:49:15.977361 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81743302-97b1-40c2-953f-070a5b775a74-utilities\") pod \"redhat-operators-wjtk6\" (UID: \"81743302-97b1-40c2-953f-070a5b775a74\") " 
pod="openshift-marketplace/redhat-operators-wjtk6" Nov 23 14:49:16 crc kubenswrapper[4718]: I1123 14:49:16.005703 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crs4w\" (UniqueName: \"kubernetes.io/projected/81743302-97b1-40c2-953f-070a5b775a74-kube-api-access-crs4w\") pod \"redhat-operators-wjtk6\" (UID: \"81743302-97b1-40c2-953f-070a5b775a74\") " pod="openshift-marketplace/redhat-operators-wjtk6" Nov 23 14:49:16 crc kubenswrapper[4718]: I1123 14:49:16.009980 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wjtk6" Nov 23 14:49:16 crc kubenswrapper[4718]: I1123 14:49:16.432816 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wjtk6"] Nov 23 14:49:16 crc kubenswrapper[4718]: I1123 14:49:16.678646 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wmpb" event={"ID":"7dffe07d-8aa8-46b3-a5a5-28d8152d6df3","Type":"ContainerStarted","Data":"02572fe51b201b18ade5eafaf0f9efe549bbcf4f859d0c1ef50ec8d0c7af748f"} Nov 23 14:49:16 crc kubenswrapper[4718]: I1123 14:49:16.680525 4718 generic.go:334] "Generic (PLEG): container finished" podID="81743302-97b1-40c2-953f-070a5b775a74" containerID="66775f6812fdf7f793392c68f2d620090b7f8637935db19d49aac9895c10c77e" exitCode=0 Nov 23 14:49:16 crc kubenswrapper[4718]: I1123 14:49:16.680566 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjtk6" event={"ID":"81743302-97b1-40c2-953f-070a5b775a74","Type":"ContainerDied","Data":"66775f6812fdf7f793392c68f2d620090b7f8637935db19d49aac9895c10c77e"} Nov 23 14:49:16 crc kubenswrapper[4718]: I1123 14:49:16.680590 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjtk6" event={"ID":"81743302-97b1-40c2-953f-070a5b775a74","Type":"ContainerStarted","Data":"f9e36f1366de07623307b11e254e713387ca96fcc774465721a4c96a73088146"} Nov 23 14:49:16 crc kubenswrapper[4718]: I1123 14:49:16.710685 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7wmpb" podStartSLOduration=3.2663368 podStartE2EDuration="5.710654323s" podCreationTimestamp="2025-11-23 14:49:11 +0000 UTC" firstStartedPulling="2025-11-23 14:49:13.651023662 +0000 UTC m=+204.890643556" lastFinishedPulling="2025-11-23 14:49:16.095341235 +0000 UTC m=+207.334961079" observedRunningTime="2025-11-23 14:49:16.706166675 +0000 UTC m=+207.945786599" watchObservedRunningTime="2025-11-23 14:49:16.710654323 +0000 UTC m=+207.950274207" Nov 23 14:49:17 crc kubenswrapper[4718]: I1123 14:49:17.686556 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjtk6" event={"ID":"81743302-97b1-40c2-953f-070a5b775a74","Type":"ContainerStarted","Data":"541161d337536d5e3601ea0669d0ceac8dcca1b35a18adebf70114127020a62b"} Nov 23 14:49:18 crc kubenswrapper[4718]: I1123 14:49:18.694023 4718 generic.go:334] "Generic (PLEG): container finished" podID="81743302-97b1-40c2-953f-070a5b775a74" containerID="541161d337536d5e3601ea0669d0ceac8dcca1b35a18adebf70114127020a62b" exitCode=0 Nov 23 14:49:18 crc kubenswrapper[4718]: I1123 14:49:18.694115 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjtk6" event={"ID":"81743302-97b1-40c2-953f-070a5b775a74","Type":"ContainerDied","Data":"541161d337536d5e3601ea0669d0ceac8dcca1b35a18adebf70114127020a62b"} Nov 
23 14:49:19 crc kubenswrapper[4718]: I1123 14:49:19.645391 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pn9qf" Nov 23 14:49:19 crc kubenswrapper[4718]: I1123 14:49:19.645940 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pn9qf" Nov 23 14:49:19 crc kubenswrapper[4718]: I1123 14:49:19.717815 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pn9qf" Nov 23 14:49:19 crc kubenswrapper[4718]: I1123 14:49:19.764033 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pn9qf" Nov 23 14:49:19 crc kubenswrapper[4718]: I1123 14:49:19.838921 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nb5q4" Nov 23 14:49:19 crc kubenswrapper[4718]: I1123 14:49:19.839310 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nb5q4" Nov 23 14:49:19 crc kubenswrapper[4718]: I1123 14:49:19.893326 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nb5q4" Nov 23 14:49:20 crc kubenswrapper[4718]: I1123 14:49:20.707729 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjtk6" event={"ID":"81743302-97b1-40c2-953f-070a5b775a74","Type":"ContainerStarted","Data":"0a55aca34afb59c6e855755747242408a615e530a3603606349879eefb084272"} Nov 23 14:49:20 crc kubenswrapper[4718]: I1123 14:49:20.736150 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wjtk6" podStartSLOduration=2.9846388619999997 podStartE2EDuration="5.736131106s" podCreationTimestamp="2025-11-23 14:49:15 +0000 UTC" firstStartedPulling="2025-11-23 14:49:16.682273271 +0000 UTC m=+207.921893135" lastFinishedPulling="2025-11-23 14:49:19.433765535 +0000 UTC m=+210.673385379" observedRunningTime="2025-11-23 14:49:20.732925955 +0000 UTC m=+211.972545859" watchObservedRunningTime="2025-11-23 14:49:20.736131106 +0000 UTC m=+211.975750950" Nov 23 14:49:20 crc kubenswrapper[4718]: I1123 14:49:20.776368 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nb5q4" Nov 23 14:49:22 crc kubenswrapper[4718]: I1123 14:49:22.252859 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7wmpb" Nov 23 14:49:22 crc kubenswrapper[4718]: I1123 14:49:22.253110 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7wmpb" Nov 23 14:49:22 crc kubenswrapper[4718]: I1123 14:49:22.291312 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7wmpb" Nov 23 14:49:22 crc kubenswrapper[4718]: I1123 14:49:22.766377 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7wmpb" Nov 23 14:49:23 crc kubenswrapper[4718]: I1123 14:49:23.055418 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Nov 23 14:49:23 crc kubenswrapper[4718]: I1123 14:49:23.055649 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 14:49:23 crc kubenswrapper[4718]: I1123 14:49:23.055715 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" Nov 23 14:49:23 crc kubenswrapper[4718]: I1123 14:49:23.056432 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e"} pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 14:49:23 crc kubenswrapper[4718]: I1123 14:49:23.056538 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" containerID="cri-o://c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e" gracePeriod=600 Nov 23 14:49:23 crc kubenswrapper[4718]: I1123 14:49:23.741197 4718 generic.go:334] "Generic (PLEG): container finished" podID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerID="c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e" exitCode=0 Nov 23 14:49:23 crc kubenswrapper[4718]: I1123 14:49:23.741299 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerDied","Data":"c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e"} Nov 23 14:49:23 crc kubenswrapper[4718]: I1123 14:49:23.741556 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerStarted","Data":"ce79aba3ac30f4f8a186f32435648dbf5a20f89ecec480680ab901eafede0c18"} Nov 23 14:49:26 crc kubenswrapper[4718]: I1123 14:49:26.010695 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wjtk6" Nov 23 14:49:26 crc kubenswrapper[4718]: I1123 14:49:26.011132 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wjtk6" Nov 23 14:49:26 crc kubenswrapper[4718]: I1123 14:49:26.067017 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wjtk6" Nov 23 14:49:26 crc kubenswrapper[4718]: I1123 14:49:26.762396 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" podUID="fa4bc064-a334-47bd-820e-00ced1c89025" containerName="oauth-openshift" containerID="cri-o://a8b1cfc61b4cd42c65dcf17416e7823d612ec06ee754022dd594a8955b1fb86c" gracePeriod=15 Nov 23 14:49:26 crc kubenswrapper[4718]: I1123 14:49:26.819158 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wjtk6" Nov 23 14:49:27 crc 
kubenswrapper[4718]: I1123 14:49:27.159234 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.194712 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-b97999dd9-bvjn2"] Nov 23 14:49:27 crc kubenswrapper[4718]: E1123 14:49:27.194941 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa4bc064-a334-47bd-820e-00ced1c89025" containerName="oauth-openshift" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.194954 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa4bc064-a334-47bd-820e-00ced1c89025" containerName="oauth-openshift" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.195070 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa4bc064-a334-47bd-820e-00ced1c89025" containerName="oauth-openshift" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.195498 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.207758 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-b97999dd9-bvjn2"] Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.342724 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fa4bc064-a334-47bd-820e-00ced1c89025-audit-dir\") pod \"fa4bc064-a334-47bd-820e-00ced1c89025\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.342775 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-user-template-provider-selection\") pod \"fa4bc064-a334-47bd-820e-00ced1c89025\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.342808 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-router-certs\") pod \"fa4bc064-a334-47bd-820e-00ced1c89025\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.342838 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-cliconfig\") pod \"fa4bc064-a334-47bd-820e-00ced1c89025\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.342858 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-user-template-login\") pod \"fa4bc064-a334-47bd-820e-00ced1c89025\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.342881 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-serving-cert\") pod 
\"fa4bc064-a334-47bd-820e-00ced1c89025\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.342912 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-session\") pod \"fa4bc064-a334-47bd-820e-00ced1c89025\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.342939 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2svw\" (UniqueName: \"kubernetes.io/projected/fa4bc064-a334-47bd-820e-00ced1c89025-kube-api-access-v2svw\") pod \"fa4bc064-a334-47bd-820e-00ced1c89025\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.342961 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fa4bc064-a334-47bd-820e-00ced1c89025-audit-policies\") pod \"fa4bc064-a334-47bd-820e-00ced1c89025\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.342987 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-trusted-ca-bundle\") pod \"fa4bc064-a334-47bd-820e-00ced1c89025\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.343035 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-user-template-error\") pod \"fa4bc064-a334-47bd-820e-00ced1c89025\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.343056 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-ocp-branding-template\") pod \"fa4bc064-a334-47bd-820e-00ced1c89025\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.343080 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-service-ca\") pod \"fa4bc064-a334-47bd-820e-00ced1c89025\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.343108 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-user-idp-0-file-data\") pod \"fa4bc064-a334-47bd-820e-00ced1c89025\" (UID: \"fa4bc064-a334-47bd-820e-00ced1c89025\") " Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.343181 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-audit-dir\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " 
pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.343212 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-audit-policies\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.343237 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-user-template-error\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.343273 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-system-session\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.343296 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.343321 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-system-service-ca\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.343346 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.343385 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.343407 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-user-template-login\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.343432 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.343488 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.343516 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pvsv\" (UniqueName: \"kubernetes.io/projected/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-kube-api-access-5pvsv\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.343541 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-system-router-certs\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.343570 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.343719 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa4bc064-a334-47bd-820e-00ced1c89025-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "fa4bc064-a334-47bd-820e-00ced1c89025" (UID: "fa4bc064-a334-47bd-820e-00ced1c89025"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.344403 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "fa4bc064-a334-47bd-820e-00ced1c89025" (UID: "fa4bc064-a334-47bd-820e-00ced1c89025"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.344707 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa4bc064-a334-47bd-820e-00ced1c89025-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "fa4bc064-a334-47bd-820e-00ced1c89025" (UID: "fa4bc064-a334-47bd-820e-00ced1c89025"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.344726 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "fa4bc064-a334-47bd-820e-00ced1c89025" (UID: "fa4bc064-a334-47bd-820e-00ced1c89025"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.344794 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "fa4bc064-a334-47bd-820e-00ced1c89025" (UID: "fa4bc064-a334-47bd-820e-00ced1c89025"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.349318 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "fa4bc064-a334-47bd-820e-00ced1c89025" (UID: "fa4bc064-a334-47bd-820e-00ced1c89025"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.349487 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "fa4bc064-a334-47bd-820e-00ced1c89025" (UID: "fa4bc064-a334-47bd-820e-00ced1c89025"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.349660 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "fa4bc064-a334-47bd-820e-00ced1c89025" (UID: "fa4bc064-a334-47bd-820e-00ced1c89025"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.349951 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "fa4bc064-a334-47bd-820e-00ced1c89025" (UID: "fa4bc064-a334-47bd-820e-00ced1c89025"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.350679 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "fa4bc064-a334-47bd-820e-00ced1c89025" (UID: "fa4bc064-a334-47bd-820e-00ced1c89025"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.351040 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "fa4bc064-a334-47bd-820e-00ced1c89025" (UID: "fa4bc064-a334-47bd-820e-00ced1c89025"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.351349 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa4bc064-a334-47bd-820e-00ced1c89025-kube-api-access-v2svw" (OuterVolumeSpecName: "kube-api-access-v2svw") pod "fa4bc064-a334-47bd-820e-00ced1c89025" (UID: "fa4bc064-a334-47bd-820e-00ced1c89025"). InnerVolumeSpecName "kube-api-access-v2svw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.351836 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "fa4bc064-a334-47bd-820e-00ced1c89025" (UID: "fa4bc064-a334-47bd-820e-00ced1c89025"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.353170 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "fa4bc064-a334-47bd-820e-00ced1c89025" (UID: "fa4bc064-a334-47bd-820e-00ced1c89025"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444552 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444604 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-audit-dir\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444625 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-audit-policies\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444647 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-user-template-error\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444667 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-system-session\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444686 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444704 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-system-service-ca\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444724 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc 
kubenswrapper[4718]: I1123 14:49:27.444717 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-audit-dir\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444755 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444771 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-user-template-login\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444792 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444820 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444838 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pvsv\" (UniqueName: \"kubernetes.io/projected/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-kube-api-access-5pvsv\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444858 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-system-router-certs\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444894 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444905 4718 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fa4bc064-a334-47bd-820e-00ced1c89025-audit-dir\") on node 
\"crc\" DevicePath \"\"" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444918 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444928 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444937 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444946 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444955 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444964 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444973 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2svw\" (UniqueName: \"kubernetes.io/projected/fa4bc064-a334-47bd-820e-00ced1c89025-kube-api-access-v2svw\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444982 4718 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fa4bc064-a334-47bd-820e-00ced1c89025-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.444990 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.445001 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.445010 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.445019 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/fa4bc064-a334-47bd-820e-00ced1c89025-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.445471 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-audit-policies\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.445639 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-system-service-ca\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.446090 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.446270 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.448845 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-system-router-certs\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.448935 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.449620 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-user-template-error\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.450227 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-system-session\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " 
pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.451741 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.452423 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.454529 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.455890 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-v4-0-config-user-template-login\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.462336 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pvsv\" (UniqueName: \"kubernetes.io/projected/d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c-kube-api-access-5pvsv\") pod \"oauth-openshift-b97999dd9-bvjn2\" (UID: \"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c\") " pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.520490 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.763561 4718 generic.go:334] "Generic (PLEG): container finished" podID="fa4bc064-a334-47bd-820e-00ced1c89025" containerID="a8b1cfc61b4cd42c65dcf17416e7823d612ec06ee754022dd594a8955b1fb86c" exitCode=0 Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.763614 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.763644 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" event={"ID":"fa4bc064-a334-47bd-820e-00ced1c89025","Type":"ContainerDied","Data":"a8b1cfc61b4cd42c65dcf17416e7823d612ec06ee754022dd594a8955b1fb86c"} Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.764052 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hnvnt" event={"ID":"fa4bc064-a334-47bd-820e-00ced1c89025","Type":"ContainerDied","Data":"5fc14c1ec8d617a1f2ea56d94ff9aa9625ecf0987b7a04a91528ab659d028f72"} Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.764072 4718 scope.go:117] "RemoveContainer" containerID="a8b1cfc61b4cd42c65dcf17416e7823d612ec06ee754022dd594a8955b1fb86c" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.790665 4718 scope.go:117] "RemoveContainer" containerID="a8b1cfc61b4cd42c65dcf17416e7823d612ec06ee754022dd594a8955b1fb86c" Nov 23 14:49:27 crc kubenswrapper[4718]: E1123 14:49:27.791575 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8b1cfc61b4cd42c65dcf17416e7823d612ec06ee754022dd594a8955b1fb86c\": container with ID starting with a8b1cfc61b4cd42c65dcf17416e7823d612ec06ee754022dd594a8955b1fb86c not found: ID does not exist" containerID="a8b1cfc61b4cd42c65dcf17416e7823d612ec06ee754022dd594a8955b1fb86c" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.791644 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8b1cfc61b4cd42c65dcf17416e7823d612ec06ee754022dd594a8955b1fb86c"} err="failed to get container status \"a8b1cfc61b4cd42c65dcf17416e7823d612ec06ee754022dd594a8955b1fb86c\": rpc error: code = NotFound desc = could not find container \"a8b1cfc61b4cd42c65dcf17416e7823d612ec06ee754022dd594a8955b1fb86c\": container with ID starting with a8b1cfc61b4cd42c65dcf17416e7823d612ec06ee754022dd594a8955b1fb86c not found: ID does not exist" Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.801706 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hnvnt"] Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.804608 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hnvnt"] Nov 23 14:49:27 crc kubenswrapper[4718]: I1123 14:49:27.951320 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-b97999dd9-bvjn2"] Nov 23 14:49:28 crc kubenswrapper[4718]: I1123 14:49:28.448058 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa4bc064-a334-47bd-820e-00ced1c89025" path="/var/lib/kubelet/pods/fa4bc064-a334-47bd-820e-00ced1c89025/volumes" Nov 23 14:49:28 crc kubenswrapper[4718]: I1123 14:49:28.772014 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" event={"ID":"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c","Type":"ContainerStarted","Data":"bd036240d3ded2cb8c0159e4051a2d773aeba9c5fb43afee546c92c0b988bfd5"} Nov 23 14:49:28 crc kubenswrapper[4718]: I1123 14:49:28.772053 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" 
event={"ID":"d80c9aaa-38bc-4f65-9cda-3dc6c4eb4a6c","Type":"ContainerStarted","Data":"03e71466f8683fd8766d450edb60565d9ba28d8c2801da1fe8b1de51cf12720a"} Nov 23 14:49:29 crc kubenswrapper[4718]: I1123 14:49:29.777109 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:29 crc kubenswrapper[4718]: I1123 14:49:29.783002 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" Nov 23 14:49:29 crc kubenswrapper[4718]: I1123 14:49:29.802387 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-b97999dd9-bvjn2" podStartSLOduration=28.802370698 podStartE2EDuration="28.802370698s" podCreationTimestamp="2025-11-23 14:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:49:29.799875408 +0000 UTC m=+221.039495292" watchObservedRunningTime="2025-11-23 14:49:29.802370698 +0000 UTC m=+221.041990542" Nov 23 14:50:49 crc kubenswrapper[4718]: I1123 14:50:49.946011 4718 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Nov 23 14:51:23 crc kubenswrapper[4718]: I1123 14:51:23.053282 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 14:51:23 crc kubenswrapper[4718]: I1123 14:51:23.053856 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 14:51:53 crc kubenswrapper[4718]: I1123 14:51:53.052835 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 14:51:53 crc kubenswrapper[4718]: I1123 14:51:53.053372 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 14:52:23 crc kubenswrapper[4718]: I1123 14:52:23.052723 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 14:52:23 crc kubenswrapper[4718]: I1123 14:52:23.053240 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 23 14:52:23 crc kubenswrapper[4718]: I1123 14:52:23.053288 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" Nov 23 14:52:23 crc kubenswrapper[4718]: I1123 14:52:23.053851 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ce79aba3ac30f4f8a186f32435648dbf5a20f89ecec480680ab901eafede0c18"} pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 14:52:23 crc kubenswrapper[4718]: I1123 14:52:23.053979 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" containerID="cri-o://ce79aba3ac30f4f8a186f32435648dbf5a20f89ecec480680ab901eafede0c18" gracePeriod=600 Nov 23 14:52:23 crc kubenswrapper[4718]: I1123 14:52:23.882321 4718 generic.go:334] "Generic (PLEG): container finished" podID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerID="ce79aba3ac30f4f8a186f32435648dbf5a20f89ecec480680ab901eafede0c18" exitCode=0 Nov 23 14:52:23 crc kubenswrapper[4718]: I1123 14:52:23.882464 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerDied","Data":"ce79aba3ac30f4f8a186f32435648dbf5a20f89ecec480680ab901eafede0c18"} Nov 23 14:52:23 crc kubenswrapper[4718]: I1123 14:52:23.882704 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerStarted","Data":"e69f70c6d8e0cb888bc50f00b9d126d21fc061450a3827ee8309102066c2eb2c"} Nov 23 14:52:23 crc kubenswrapper[4718]: I1123 14:52:23.882736 4718 scope.go:117] "RemoveContainer" containerID="c90c18a238c5666966df422f2cce16e863ed60468ed3d9292ebc58ecd35b2c1e" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.097784 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cf7sv"] Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.098978 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.117113 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cf7sv"] Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.213350 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1a5a17d6-d7b3-4d0b-9e36-f677746cf198-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cf7sv\" (UID: \"1a5a17d6-d7b3-4d0b-9e36-f677746cf198\") " pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.213467 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-cf7sv\" (UID: \"1a5a17d6-d7b3-4d0b-9e36-f677746cf198\") " pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.213505 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1a5a17d6-d7b3-4d0b-9e36-f677746cf198-registry-certificates\") pod \"image-registry-66df7c8f76-cf7sv\" (UID: \"1a5a17d6-d7b3-4d0b-9e36-f677746cf198\") " pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.213567 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1a5a17d6-d7b3-4d0b-9e36-f677746cf198-registry-tls\") pod \"image-registry-66df7c8f76-cf7sv\" (UID: \"1a5a17d6-d7b3-4d0b-9e36-f677746cf198\") " pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.213636 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a5a17d6-d7b3-4d0b-9e36-f677746cf198-bound-sa-token\") pod \"image-registry-66df7c8f76-cf7sv\" (UID: \"1a5a17d6-d7b3-4d0b-9e36-f677746cf198\") " pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.213834 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a5a17d6-d7b3-4d0b-9e36-f677746cf198-trusted-ca\") pod \"image-registry-66df7c8f76-cf7sv\" (UID: \"1a5a17d6-d7b3-4d0b-9e36-f677746cf198\") " pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.213870 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1a5a17d6-d7b3-4d0b-9e36-f677746cf198-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cf7sv\" (UID: \"1a5a17d6-d7b3-4d0b-9e36-f677746cf198\") " pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.213987 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx9r9\" (UniqueName: 
\"kubernetes.io/projected/1a5a17d6-d7b3-4d0b-9e36-f677746cf198-kube-api-access-mx9r9\") pod \"image-registry-66df7c8f76-cf7sv\" (UID: \"1a5a17d6-d7b3-4d0b-9e36-f677746cf198\") " pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.237778 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-cf7sv\" (UID: \"1a5a17d6-d7b3-4d0b-9e36-f677746cf198\") " pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.315633 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a5a17d6-d7b3-4d0b-9e36-f677746cf198-bound-sa-token\") pod \"image-registry-66df7c8f76-cf7sv\" (UID: \"1a5a17d6-d7b3-4d0b-9e36-f677746cf198\") " pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.315715 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a5a17d6-d7b3-4d0b-9e36-f677746cf198-trusted-ca\") pod \"image-registry-66df7c8f76-cf7sv\" (UID: \"1a5a17d6-d7b3-4d0b-9e36-f677746cf198\") " pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.315738 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1a5a17d6-d7b3-4d0b-9e36-f677746cf198-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cf7sv\" (UID: \"1a5a17d6-d7b3-4d0b-9e36-f677746cf198\") " pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.315762 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx9r9\" (UniqueName: \"kubernetes.io/projected/1a5a17d6-d7b3-4d0b-9e36-f677746cf198-kube-api-access-mx9r9\") pod \"image-registry-66df7c8f76-cf7sv\" (UID: \"1a5a17d6-d7b3-4d0b-9e36-f677746cf198\") " pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.315787 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1a5a17d6-d7b3-4d0b-9e36-f677746cf198-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cf7sv\" (UID: \"1a5a17d6-d7b3-4d0b-9e36-f677746cf198\") " pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.315805 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1a5a17d6-d7b3-4d0b-9e36-f677746cf198-registry-certificates\") pod \"image-registry-66df7c8f76-cf7sv\" (UID: \"1a5a17d6-d7b3-4d0b-9e36-f677746cf198\") " pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.315824 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1a5a17d6-d7b3-4d0b-9e36-f677746cf198-registry-tls\") pod \"image-registry-66df7c8f76-cf7sv\" (UID: \"1a5a17d6-d7b3-4d0b-9e36-f677746cf198\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.316781 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1a5a17d6-d7b3-4d0b-9e36-f677746cf198-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cf7sv\" (UID: \"1a5a17d6-d7b3-4d0b-9e36-f677746cf198\") " pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.317511 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1a5a17d6-d7b3-4d0b-9e36-f677746cf198-registry-certificates\") pod \"image-registry-66df7c8f76-cf7sv\" (UID: \"1a5a17d6-d7b3-4d0b-9e36-f677746cf198\") " pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.317689 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a5a17d6-d7b3-4d0b-9e36-f677746cf198-trusted-ca\") pod \"image-registry-66df7c8f76-cf7sv\" (UID: \"1a5a17d6-d7b3-4d0b-9e36-f677746cf198\") " pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.322602 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1a5a17d6-d7b3-4d0b-9e36-f677746cf198-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cf7sv\" (UID: \"1a5a17d6-d7b3-4d0b-9e36-f677746cf198\") " pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.323375 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1a5a17d6-d7b3-4d0b-9e36-f677746cf198-registry-tls\") pod \"image-registry-66df7c8f76-cf7sv\" (UID: \"1a5a17d6-d7b3-4d0b-9e36-f677746cf198\") " pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.331358 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a5a17d6-d7b3-4d0b-9e36-f677746cf198-bound-sa-token\") pod \"image-registry-66df7c8f76-cf7sv\" (UID: \"1a5a17d6-d7b3-4d0b-9e36-f677746cf198\") " pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.332795 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx9r9\" (UniqueName: \"kubernetes.io/projected/1a5a17d6-d7b3-4d0b-9e36-f677746cf198-kube-api-access-mx9r9\") pod \"image-registry-66df7c8f76-cf7sv\" (UID: \"1a5a17d6-d7b3-4d0b-9e36-f677746cf198\") " pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.414391 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:26 crc kubenswrapper[4718]: I1123 14:53:26.581242 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cf7sv"] Nov 23 14:53:27 crc kubenswrapper[4718]: I1123 14:53:27.232869 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" event={"ID":"1a5a17d6-d7b3-4d0b-9e36-f677746cf198","Type":"ContainerStarted","Data":"7cc6e699dd8041b5604f8872edecf53bdd15d4e1b88d61c065deb8c4d01dfa8b"} Nov 23 14:53:27 crc kubenswrapper[4718]: I1123 14:53:27.234950 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" event={"ID":"1a5a17d6-d7b3-4d0b-9e36-f677746cf198","Type":"ContainerStarted","Data":"4bfe3b294603f080b15e93e010340562ef2b7246417a2724b3097ea3822a9874"} Nov 23 14:53:27 crc kubenswrapper[4718]: I1123 14:53:27.235159 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:27 crc kubenswrapper[4718]: I1123 14:53:27.258640 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" podStartSLOduration=1.258609659 podStartE2EDuration="1.258609659s" podCreationTimestamp="2025-11-23 14:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:53:27.254719134 +0000 UTC m=+458.494339008" watchObservedRunningTime="2025-11-23 14:53:27.258609659 +0000 UTC m=+458.498229593" Nov 23 14:53:46 crc kubenswrapper[4718]: I1123 14:53:46.420306 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-cf7sv" Nov 23 14:53:46 crc kubenswrapper[4718]: I1123 14:53:46.490571 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tsbrh"] Nov 23 14:54:11 crc kubenswrapper[4718]: I1123 14:54:11.532478 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" podUID="108ad2a6-0176-40d5-9252-577047cea58d" containerName="registry" containerID="cri-o://5bd438efdef5a0ce0c86854ef5284103ba2989a0ba6566153b37608f11cd73ca" gracePeriod=30 Nov 23 14:54:12 crc kubenswrapper[4718]: I1123 14:54:12.313577 4718 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-tsbrh container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.10:5000/healthz\": dial tcp 10.217.0.10:5000: connect: connection refused" start-of-body= Nov 23 14:54:12 crc kubenswrapper[4718]: I1123 14:54:12.313694 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" podUID="108ad2a6-0176-40d5-9252-577047cea58d" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.10:5000/healthz\": dial tcp 10.217.0.10:5000: connect: connection refused" Nov 23 14:54:12 crc kubenswrapper[4718]: I1123 14:54:12.512678 4718 generic.go:334] "Generic (PLEG): container finished" podID="108ad2a6-0176-40d5-9252-577047cea58d" containerID="5bd438efdef5a0ce0c86854ef5284103ba2989a0ba6566153b37608f11cd73ca" exitCode=0 Nov 23 14:54:12 crc kubenswrapper[4718]: I1123 14:54:12.512761 4718 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" event={"ID":"108ad2a6-0176-40d5-9252-577047cea58d","Type":"ContainerDied","Data":"5bd438efdef5a0ce0c86854ef5284103ba2989a0ba6566153b37608f11cd73ca"} Nov 23 14:54:12 crc kubenswrapper[4718]: I1123 14:54:12.990173 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.152402 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/108ad2a6-0176-40d5-9252-577047cea58d-bound-sa-token\") pod \"108ad2a6-0176-40d5-9252-577047cea58d\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.152597 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/108ad2a6-0176-40d5-9252-577047cea58d-installation-pull-secrets\") pod \"108ad2a6-0176-40d5-9252-577047cea58d\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.152681 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/108ad2a6-0176-40d5-9252-577047cea58d-registry-certificates\") pod \"108ad2a6-0176-40d5-9252-577047cea58d\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.152972 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"108ad2a6-0176-40d5-9252-577047cea58d\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.153031 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/108ad2a6-0176-40d5-9252-577047cea58d-registry-tls\") pod \"108ad2a6-0176-40d5-9252-577047cea58d\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.153112 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/108ad2a6-0176-40d5-9252-577047cea58d-ca-trust-extracted\") pod \"108ad2a6-0176-40d5-9252-577047cea58d\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.153158 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/108ad2a6-0176-40d5-9252-577047cea58d-trusted-ca\") pod \"108ad2a6-0176-40d5-9252-577047cea58d\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.153209 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6xh9\" (UniqueName: \"kubernetes.io/projected/108ad2a6-0176-40d5-9252-577047cea58d-kube-api-access-f6xh9\") pod \"108ad2a6-0176-40d5-9252-577047cea58d\" (UID: \"108ad2a6-0176-40d5-9252-577047cea58d\") " Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.154342 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/108ad2a6-0176-40d5-9252-577047cea58d-trusted-ca" 
(OuterVolumeSpecName: "trusted-ca") pod "108ad2a6-0176-40d5-9252-577047cea58d" (UID: "108ad2a6-0176-40d5-9252-577047cea58d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.154691 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/108ad2a6-0176-40d5-9252-577047cea58d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "108ad2a6-0176-40d5-9252-577047cea58d" (UID: "108ad2a6-0176-40d5-9252-577047cea58d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.161386 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/108ad2a6-0176-40d5-9252-577047cea58d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "108ad2a6-0176-40d5-9252-577047cea58d" (UID: "108ad2a6-0176-40d5-9252-577047cea58d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.162741 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/108ad2a6-0176-40d5-9252-577047cea58d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "108ad2a6-0176-40d5-9252-577047cea58d" (UID: "108ad2a6-0176-40d5-9252-577047cea58d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.167008 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/108ad2a6-0176-40d5-9252-577047cea58d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "108ad2a6-0176-40d5-9252-577047cea58d" (UID: "108ad2a6-0176-40d5-9252-577047cea58d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.167657 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/108ad2a6-0176-40d5-9252-577047cea58d-kube-api-access-f6xh9" (OuterVolumeSpecName: "kube-api-access-f6xh9") pod "108ad2a6-0176-40d5-9252-577047cea58d" (UID: "108ad2a6-0176-40d5-9252-577047cea58d"). InnerVolumeSpecName "kube-api-access-f6xh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.171277 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/108ad2a6-0176-40d5-9252-577047cea58d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "108ad2a6-0176-40d5-9252-577047cea58d" (UID: "108ad2a6-0176-40d5-9252-577047cea58d"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.173924 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "108ad2a6-0176-40d5-9252-577047cea58d" (UID: "108ad2a6-0176-40d5-9252-577047cea58d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.255499 4718 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/108ad2a6-0176-40d5-9252-577047cea58d-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.255534 4718 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/108ad2a6-0176-40d5-9252-577047cea58d-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.255566 4718 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/108ad2a6-0176-40d5-9252-577047cea58d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.255574 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/108ad2a6-0176-40d5-9252-577047cea58d-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.255583 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6xh9\" (UniqueName: \"kubernetes.io/projected/108ad2a6-0176-40d5-9252-577047cea58d-kube-api-access-f6xh9\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.255590 4718 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/108ad2a6-0176-40d5-9252-577047cea58d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.255599 4718 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/108ad2a6-0176-40d5-9252-577047cea58d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.519418 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" event={"ID":"108ad2a6-0176-40d5-9252-577047cea58d","Type":"ContainerDied","Data":"c997546a855a639a2314ecea956ee30ef1003a24a440f44df413218a88770ab2"} Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.519483 4718 scope.go:117] "RemoveContainer" containerID="5bd438efdef5a0ce0c86854ef5284103ba2989a0ba6566153b37608f11cd73ca" Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.519548 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tsbrh" Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.561146 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tsbrh"] Nov 23 14:54:13 crc kubenswrapper[4718]: I1123 14:54:13.564063 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tsbrh"] Nov 23 14:54:14 crc kubenswrapper[4718]: I1123 14:54:14.450273 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="108ad2a6-0176-40d5-9252-577047cea58d" path="/var/lib/kubelet/pods/108ad2a6-0176-40d5-9252-577047cea58d/volumes" Nov 23 14:54:23 crc kubenswrapper[4718]: I1123 14:54:23.052542 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 14:54:23 crc kubenswrapper[4718]: I1123 14:54:23.053025 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 14:54:30 crc kubenswrapper[4718]: I1123 14:54:30.935391 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-dlx5n"] Nov 23 14:54:30 crc kubenswrapper[4718]: E1123 14:54:30.936177 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="108ad2a6-0176-40d5-9252-577047cea58d" containerName="registry" Nov 23 14:54:30 crc kubenswrapper[4718]: I1123 14:54:30.936192 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="108ad2a6-0176-40d5-9252-577047cea58d" containerName="registry" Nov 23 14:54:30 crc kubenswrapper[4718]: I1123 14:54:30.936326 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="108ad2a6-0176-40d5-9252-577047cea58d" containerName="registry" Nov 23 14:54:30 crc kubenswrapper[4718]: I1123 14:54:30.936782 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-dlx5n" Nov 23 14:54:30 crc kubenswrapper[4718]: I1123 14:54:30.940484 4718 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-hcrfz" Nov 23 14:54:30 crc kubenswrapper[4718]: I1123 14:54:30.940810 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 23 14:54:30 crc kubenswrapper[4718]: I1123 14:54:30.942958 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 23 14:54:30 crc kubenswrapper[4718]: I1123 14:54:30.961097 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-g6g5c"] Nov 23 14:54:30 crc kubenswrapper[4718]: I1123 14:54:30.962352 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-g6g5c" Nov 23 14:54:30 crc kubenswrapper[4718]: I1123 14:54:30.965545 4718 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-t8n6c" Nov 23 14:54:30 crc kubenswrapper[4718]: I1123 14:54:30.975350 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-wtn4p"] Nov 23 14:54:30 crc kubenswrapper[4718]: I1123 14:54:30.976531 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-wtn4p" Nov 23 14:54:30 crc kubenswrapper[4718]: I1123 14:54:30.978619 4718 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-mvbnx" Nov 23 14:54:30 crc kubenswrapper[4718]: I1123 14:54:30.984602 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-g6g5c"] Nov 23 14:54:30 crc kubenswrapper[4718]: I1123 14:54:30.989534 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-wtn4p"] Nov 23 14:54:30 crc kubenswrapper[4718]: I1123 14:54:30.994813 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-dlx5n"] Nov 23 14:54:31 crc kubenswrapper[4718]: I1123 14:54:31.087080 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-299ht\" (UniqueName: \"kubernetes.io/projected/006f97d3-c32d-4175-b6d1-41f25d854d69-kube-api-access-299ht\") pod \"cert-manager-webhook-5655c58dd6-wtn4p\" (UID: \"006f97d3-c32d-4175-b6d1-41f25d854d69\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-wtn4p" Nov 23 14:54:31 crc kubenswrapper[4718]: I1123 14:54:31.087338 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zfld\" (UniqueName: \"kubernetes.io/projected/9c7e4ce6-8467-4656-9451-4ca2cf5f05e3-kube-api-access-9zfld\") pod \"cert-manager-5b446d88c5-g6g5c\" (UID: \"9c7e4ce6-8467-4656-9451-4ca2cf5f05e3\") " pod="cert-manager/cert-manager-5b446d88c5-g6g5c" Nov 23 14:54:31 crc kubenswrapper[4718]: I1123 14:54:31.087477 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml4q8\" (UniqueName: \"kubernetes.io/projected/e6b032f0-b0b8-4db8-af64-ac70e535c9e7-kube-api-access-ml4q8\") pod \"cert-manager-cainjector-7f985d654d-dlx5n\" (UID: \"e6b032f0-b0b8-4db8-af64-ac70e535c9e7\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-dlx5n" Nov 23 14:54:31 crc kubenswrapper[4718]: I1123 14:54:31.189084 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-299ht\" (UniqueName: \"kubernetes.io/projected/006f97d3-c32d-4175-b6d1-41f25d854d69-kube-api-access-299ht\") pod \"cert-manager-webhook-5655c58dd6-wtn4p\" (UID: \"006f97d3-c32d-4175-b6d1-41f25d854d69\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-wtn4p" Nov 23 14:54:31 crc kubenswrapper[4718]: I1123 14:54:31.189325 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zfld\" (UniqueName: \"kubernetes.io/projected/9c7e4ce6-8467-4656-9451-4ca2cf5f05e3-kube-api-access-9zfld\") pod \"cert-manager-5b446d88c5-g6g5c\" (UID: \"9c7e4ce6-8467-4656-9451-4ca2cf5f05e3\") " pod="cert-manager/cert-manager-5b446d88c5-g6g5c" Nov 23 14:54:31 crc kubenswrapper[4718]: I1123 14:54:31.189501 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml4q8\" (UniqueName: \"kubernetes.io/projected/e6b032f0-b0b8-4db8-af64-ac70e535c9e7-kube-api-access-ml4q8\") pod \"cert-manager-cainjector-7f985d654d-dlx5n\" (UID: \"e6b032f0-b0b8-4db8-af64-ac70e535c9e7\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-dlx5n" Nov 23 14:54:31 crc kubenswrapper[4718]: I1123 14:54:31.209244 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml4q8\" (UniqueName: \"kubernetes.io/projected/e6b032f0-b0b8-4db8-af64-ac70e535c9e7-kube-api-access-ml4q8\") pod \"cert-manager-cainjector-7f985d654d-dlx5n\" (UID: \"e6b032f0-b0b8-4db8-af64-ac70e535c9e7\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-dlx5n" Nov 23 14:54:31 crc kubenswrapper[4718]: I1123 14:54:31.210658 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zfld\" (UniqueName: \"kubernetes.io/projected/9c7e4ce6-8467-4656-9451-4ca2cf5f05e3-kube-api-access-9zfld\") pod \"cert-manager-5b446d88c5-g6g5c\" (UID: \"9c7e4ce6-8467-4656-9451-4ca2cf5f05e3\") " pod="cert-manager/cert-manager-5b446d88c5-g6g5c" Nov 23 14:54:31 crc kubenswrapper[4718]: I1123 14:54:31.220214 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-299ht\" (UniqueName: \"kubernetes.io/projected/006f97d3-c32d-4175-b6d1-41f25d854d69-kube-api-access-299ht\") pod \"cert-manager-webhook-5655c58dd6-wtn4p\" (UID: \"006f97d3-c32d-4175-b6d1-41f25d854d69\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-wtn4p" Nov 23 14:54:31 crc kubenswrapper[4718]: I1123 14:54:31.258234 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-dlx5n" Nov 23 14:54:31 crc kubenswrapper[4718]: I1123 14:54:31.278744 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-g6g5c" Nov 23 14:54:31 crc kubenswrapper[4718]: I1123 14:54:31.294918 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-wtn4p" Nov 23 14:54:31 crc kubenswrapper[4718]: I1123 14:54:31.463311 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-dlx5n"] Nov 23 14:54:31 crc kubenswrapper[4718]: I1123 14:54:31.475698 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 14:54:31 crc kubenswrapper[4718]: I1123 14:54:31.628747 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-dlx5n" event={"ID":"e6b032f0-b0b8-4db8-af64-ac70e535c9e7","Type":"ContainerStarted","Data":"a21f3b062a3682777a7b07b4de2432aabe920cca5adb3db83d6d6361af212228"} Nov 23 14:54:31 crc kubenswrapper[4718]: I1123 14:54:31.708640 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-g6g5c"] Nov 23 14:54:31 crc kubenswrapper[4718]: I1123 14:54:31.711914 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-wtn4p"] Nov 23 14:54:31 crc kubenswrapper[4718]: W1123 14:54:31.713473 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod006f97d3_c32d_4175_b6d1_41f25d854d69.slice/crio-1713e13a393b82613cab71e6e5805e3ecd4736d3fbebdcb0203d9d6c5b59ff4b WatchSource:0}: Error finding container 1713e13a393b82613cab71e6e5805e3ecd4736d3fbebdcb0203d9d6c5b59ff4b: Status 404 returned error can't find the container with id 1713e13a393b82613cab71e6e5805e3ecd4736d3fbebdcb0203d9d6c5b59ff4b Nov 23 14:54:31 crc kubenswrapper[4718]: W1123 14:54:31.714948 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c7e4ce6_8467_4656_9451_4ca2cf5f05e3.slice/crio-dc9668dc6ec934bf8afff7ae437f0d83d83ac62a77920083618722e76fbe16b9 WatchSource:0}: Error finding container dc9668dc6ec934bf8afff7ae437f0d83d83ac62a77920083618722e76fbe16b9: Status 404 returned error can't find the container with id dc9668dc6ec934bf8afff7ae437f0d83d83ac62a77920083618722e76fbe16b9 Nov 23 14:54:32 crc kubenswrapper[4718]: I1123 14:54:32.638238 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-g6g5c" event={"ID":"9c7e4ce6-8467-4656-9451-4ca2cf5f05e3","Type":"ContainerStarted","Data":"dc9668dc6ec934bf8afff7ae437f0d83d83ac62a77920083618722e76fbe16b9"} Nov 23 14:54:32 crc kubenswrapper[4718]: I1123 14:54:32.640219 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-wtn4p" event={"ID":"006f97d3-c32d-4175-b6d1-41f25d854d69","Type":"ContainerStarted","Data":"1713e13a393b82613cab71e6e5805e3ecd4736d3fbebdcb0203d9d6c5b59ff4b"} Nov 23 14:54:34 crc kubenswrapper[4718]: I1123 14:54:34.652787 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-dlx5n" event={"ID":"e6b032f0-b0b8-4db8-af64-ac70e535c9e7","Type":"ContainerStarted","Data":"84b497fd158b98f4cdce1705a1d249a2490f27d588eb777cef274a0286c0a1da"} Nov 23 14:54:34 crc kubenswrapper[4718]: I1123 14:54:34.655358 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-wtn4p" event={"ID":"006f97d3-c32d-4175-b6d1-41f25d854d69","Type":"ContainerStarted","Data":"30dfd095ff776fd74e5f8024992c832e23afb0542a7b9dee72a30004fa92250a"} Nov 23 14:54:34 crc kubenswrapper[4718]: I1123 14:54:34.655478 4718 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-wtn4p" Nov 23 14:54:34 crc kubenswrapper[4718]: I1123 14:54:34.669056 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-dlx5n" podStartSLOduration=2.127133426 podStartE2EDuration="4.669035191s" podCreationTimestamp="2025-11-23 14:54:30 +0000 UTC" firstStartedPulling="2025-11-23 14:54:31.475457701 +0000 UTC m=+522.715077545" lastFinishedPulling="2025-11-23 14:54:34.017359456 +0000 UTC m=+525.256979310" observedRunningTime="2025-11-23 14:54:34.664059508 +0000 UTC m=+525.903679352" watchObservedRunningTime="2025-11-23 14:54:34.669035191 +0000 UTC m=+525.908655045" Nov 23 14:54:34 crc kubenswrapper[4718]: I1123 14:54:34.687749 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-wtn4p" podStartSLOduration=2.291614431 podStartE2EDuration="4.687732956s" podCreationTimestamp="2025-11-23 14:54:30 +0000 UTC" firstStartedPulling="2025-11-23 14:54:31.716588051 +0000 UTC m=+522.956207895" lastFinishedPulling="2025-11-23 14:54:34.112706576 +0000 UTC m=+525.352326420" observedRunningTime="2025-11-23 14:54:34.687311834 +0000 UTC m=+525.926931688" watchObservedRunningTime="2025-11-23 14:54:34.687732956 +0000 UTC m=+525.927352800" Nov 23 14:54:35 crc kubenswrapper[4718]: I1123 14:54:35.661675 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-g6g5c" event={"ID":"9c7e4ce6-8467-4656-9451-4ca2cf5f05e3","Type":"ContainerStarted","Data":"e3e909c67045aeb31659ee6d47356af054f01a60326d4a353b2cc8fbcd4ddcab"} Nov 23 14:54:35 crc kubenswrapper[4718]: I1123 14:54:35.678328 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-g6g5c" podStartSLOduration=2.486946764 podStartE2EDuration="5.678312296s" podCreationTimestamp="2025-11-23 14:54:30 +0000 UTC" firstStartedPulling="2025-11-23 14:54:31.718621705 +0000 UTC m=+522.958241549" lastFinishedPulling="2025-11-23 14:54:34.909987237 +0000 UTC m=+526.149607081" observedRunningTime="2025-11-23 14:54:35.675802618 +0000 UTC m=+526.915422462" watchObservedRunningTime="2025-11-23 14:54:35.678312296 +0000 UTC m=+526.917932140" Nov 23 14:54:41 crc kubenswrapper[4718]: I1123 14:54:41.299766 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-wtn4p" Nov 23 14:54:41 crc kubenswrapper[4718]: I1123 14:54:41.561326 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zjskv"] Nov 23 14:54:41 crc kubenswrapper[4718]: I1123 14:54:41.562078 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="ovn-controller" containerID="cri-o://6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88" gracePeriod=30 Nov 23 14:54:41 crc kubenswrapper[4718]: I1123 14:54:41.562129 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="nbdb" containerID="cri-o://2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3" gracePeriod=30 Nov 23 14:54:41 crc kubenswrapper[4718]: I1123 14:54:41.562203 4718 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="northd" containerID="cri-o://e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159" gracePeriod=30 Nov 23 14:54:41 crc kubenswrapper[4718]: I1123 14:54:41.562241 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7" gracePeriod=30 Nov 23 14:54:41 crc kubenswrapper[4718]: I1123 14:54:41.562280 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="kube-rbac-proxy-node" containerID="cri-o://f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d" gracePeriod=30 Nov 23 14:54:41 crc kubenswrapper[4718]: I1123 14:54:41.562317 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="ovn-acl-logging" containerID="cri-o://5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2" gracePeriod=30 Nov 23 14:54:41 crc kubenswrapper[4718]: I1123 14:54:41.562584 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="sbdb" containerID="cri-o://98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6" gracePeriod=30 Nov 23 14:54:41 crc kubenswrapper[4718]: I1123 14:54:41.601230 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="ovnkube-controller" containerID="cri-o://c1809618031d9feff7a50eae28af1e1366af09869ccc0b4faba572b0ecc4c387" gracePeriod=30 Nov 23 14:54:41 crc kubenswrapper[4718]: I1123 14:54:41.696704 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zjskv_aa4a9264-1cb9-41bc-a30a-4e09bde21387/ovnkube-controller/2.log" Nov 23 14:54:41 crc kubenswrapper[4718]: I1123 14:54:41.703525 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zjskv_aa4a9264-1cb9-41bc-a30a-4e09bde21387/ovn-acl-logging/0.log" Nov 23 14:54:41 crc kubenswrapper[4718]: I1123 14:54:41.704201 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zjskv_aa4a9264-1cb9-41bc-a30a-4e09bde21387/ovn-controller/0.log" Nov 23 14:54:41 crc kubenswrapper[4718]: I1123 14:54:41.704792 4718 generic.go:334] "Generic (PLEG): container finished" podID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerID="9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7" exitCode=0 Nov 23 14:54:41 crc kubenswrapper[4718]: I1123 14:54:41.704822 4718 generic.go:334] "Generic (PLEG): container finished" podID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerID="f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d" exitCode=0 Nov 23 14:54:41 crc kubenswrapper[4718]: I1123 14:54:41.704832 4718 generic.go:334] "Generic (PLEG): container finished" podID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerID="5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2" exitCode=143 Nov 23 
14:54:41 crc kubenswrapper[4718]: I1123 14:54:41.704843 4718 generic.go:334] "Generic (PLEG): container finished" podID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerID="6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88" exitCode=143 Nov 23 14:54:41 crc kubenswrapper[4718]: I1123 14:54:41.704852 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerDied","Data":"9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7"} Nov 23 14:54:41 crc kubenswrapper[4718]: I1123 14:54:41.704893 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerDied","Data":"f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d"} Nov 23 14:54:41 crc kubenswrapper[4718]: I1123 14:54:41.704920 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerDied","Data":"5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2"} Nov 23 14:54:41 crc kubenswrapper[4718]: I1123 14:54:41.704929 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerDied","Data":"6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88"} Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.405519 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zjskv_aa4a9264-1cb9-41bc-a30a-4e09bde21387/ovnkube-controller/2.log" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.408331 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zjskv_aa4a9264-1cb9-41bc-a30a-4e09bde21387/ovn-acl-logging/0.log" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.408864 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zjskv_aa4a9264-1cb9-41bc-a30a-4e09bde21387/ovn-controller/0.log" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.409323 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.488264 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lpxw8"] Nov 23 14:54:42 crc kubenswrapper[4718]: E1123 14:54:42.488510 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="northd" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.488524 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="northd" Nov 23 14:54:42 crc kubenswrapper[4718]: E1123 14:54:42.488538 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="kube-rbac-proxy-ovn-metrics" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.488546 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="kube-rbac-proxy-ovn-metrics" Nov 23 14:54:42 crc kubenswrapper[4718]: E1123 14:54:42.488557 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="ovnkube-controller" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.488565 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="ovnkube-controller" Nov 23 14:54:42 crc kubenswrapper[4718]: E1123 14:54:42.488575 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="ovn-controller" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.488585 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="ovn-controller" Nov 23 14:54:42 crc kubenswrapper[4718]: E1123 14:54:42.488597 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="nbdb" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.488604 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="nbdb" Nov 23 14:54:42 crc kubenswrapper[4718]: E1123 14:54:42.488616 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="kubecfg-setup" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.488624 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="kubecfg-setup" Nov 23 14:54:42 crc kubenswrapper[4718]: E1123 14:54:42.488635 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="ovn-acl-logging" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.488643 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="ovn-acl-logging" Nov 23 14:54:42 crc kubenswrapper[4718]: E1123 14:54:42.488656 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="kube-rbac-proxy-node" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.488664 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="kube-rbac-proxy-node" Nov 23 14:54:42 crc kubenswrapper[4718]: E1123 14:54:42.488677 4718 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="ovnkube-controller" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.488685 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="ovnkube-controller" Nov 23 14:54:42 crc kubenswrapper[4718]: E1123 14:54:42.488695 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="sbdb" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.488703 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="sbdb" Nov 23 14:54:42 crc kubenswrapper[4718]: E1123 14:54:42.488713 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="ovnkube-controller" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.488721 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="ovnkube-controller" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.488853 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="northd" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.488868 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="nbdb" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.488879 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="ovn-controller" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.488890 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="ovnkube-controller" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.488901 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="kube-rbac-proxy-node" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.488911 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="ovn-acl-logging" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.488922 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="sbdb" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.488932 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="ovnkube-controller" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.488943 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="ovnkube-controller" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.488954 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="kube-rbac-proxy-ovn-metrics" Nov 23 14:54:42 crc kubenswrapper[4718]: E1123 14:54:42.489065 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="ovnkube-controller" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.489074 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="ovnkube-controller" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.489180 4718 
memory_manager.go:354] "RemoveStaleState removing state" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerName="ovnkube-controller" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.491082 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.539954 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-systemd-units\") pod \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.539998 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-run-openvswitch\") pod \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540039 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-etc-openvswitch\") pod \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540042 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "aa4a9264-1cb9-41bc-a30a-4e09bde21387" (UID: "aa4a9264-1cb9-41bc-a30a-4e09bde21387"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540053 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-slash\") pod \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540082 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-slash" (OuterVolumeSpecName: "host-slash") pod "aa4a9264-1cb9-41bc-a30a-4e09bde21387" (UID: "aa4a9264-1cb9-41bc-a30a-4e09bde21387"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540114 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "aa4a9264-1cb9-41bc-a30a-4e09bde21387" (UID: "aa4a9264-1cb9-41bc-a30a-4e09bde21387"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540111 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "aa4a9264-1cb9-41bc-a30a-4e09bde21387" (UID: "aa4a9264-1cb9-41bc-a30a-4e09bde21387"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540125 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-node-log\") pod \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540154 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-var-lib-openvswitch\") pod \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540177 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-run-netns\") pod \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540209 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-log-socket\") pod \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540212 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-node-log" (OuterVolumeSpecName: "node-log") pod "aa4a9264-1cb9-41bc-a30a-4e09bde21387" (UID: "aa4a9264-1cb9-41bc-a30a-4e09bde21387"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540288 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "aa4a9264-1cb9-41bc-a30a-4e09bde21387" (UID: "aa4a9264-1cb9-41bc-a30a-4e09bde21387"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540232 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "aa4a9264-1cb9-41bc-a30a-4e09bde21387" (UID: "aa4a9264-1cb9-41bc-a30a-4e09bde21387"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540257 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-log-socket" (OuterVolumeSpecName: "log-socket") pod "aa4a9264-1cb9-41bc-a30a-4e09bde21387" (UID: "aa4a9264-1cb9-41bc-a30a-4e09bde21387"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540257 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "aa4a9264-1cb9-41bc-a30a-4e09bde21387" (UID: "aa4a9264-1cb9-41bc-a30a-4e09bde21387"). 
InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540257 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-cni-netd\") pod \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540370 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-run-systemd\") pod \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540393 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-var-lib-cni-networks-ovn-kubernetes\") pod \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540425 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa4a9264-1cb9-41bc-a30a-4e09bde21387-ovn-node-metrics-cert\") pod \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540468 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-kubelet\") pod \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540512 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-cni-bin\") pod \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540520 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "aa4a9264-1cb9-41bc-a30a-4e09bde21387" (UID: "aa4a9264-1cb9-41bc-a30a-4e09bde21387"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540557 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aa4a9264-1cb9-41bc-a30a-4e09bde21387-ovnkube-script-lib\") pod \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540568 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "aa4a9264-1cb9-41bc-a30a-4e09bde21387" (UID: "aa4a9264-1cb9-41bc-a30a-4e09bde21387"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540596 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "aa4a9264-1cb9-41bc-a30a-4e09bde21387" (UID: "aa4a9264-1cb9-41bc-a30a-4e09bde21387"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540600 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv7dr\" (UniqueName: \"kubernetes.io/projected/aa4a9264-1cb9-41bc-a30a-4e09bde21387-kube-api-access-fv7dr\") pod \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540633 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa4a9264-1cb9-41bc-a30a-4e09bde21387-env-overrides\") pod \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540668 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-run-ovn-kubernetes\") pod \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540749 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-run-ovn\") pod \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.540780 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa4a9264-1cb9-41bc-a30a-4e09bde21387-ovnkube-config\") pod \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\" (UID: \"aa4a9264-1cb9-41bc-a30a-4e09bde21387\") " Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.541021 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa4a9264-1cb9-41bc-a30a-4e09bde21387-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "aa4a9264-1cb9-41bc-a30a-4e09bde21387" (UID: "aa4a9264-1cb9-41bc-a30a-4e09bde21387"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.541055 4718 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.541061 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "aa4a9264-1cb9-41bc-a30a-4e09bde21387" (UID: "aa4a9264-1cb9-41bc-a30a-4e09bde21387"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.541076 4718 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-log-socket\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.541093 4718 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.541113 4718 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.541090 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "aa4a9264-1cb9-41bc-a30a-4e09bde21387" (UID: "aa4a9264-1cb9-41bc-a30a-4e09bde21387"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.541130 4718 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.541148 4718 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.541165 4718 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.541179 4718 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.541182 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa4a9264-1cb9-41bc-a30a-4e09bde21387-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "aa4a9264-1cb9-41bc-a30a-4e09bde21387" (UID: "aa4a9264-1cb9-41bc-a30a-4e09bde21387"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.541194 4718 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.541260 4718 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-slash\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.541278 4718 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-node-log\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.541294 4718 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.541471 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa4a9264-1cb9-41bc-a30a-4e09bde21387-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "aa4a9264-1cb9-41bc-a30a-4e09bde21387" (UID: "aa4a9264-1cb9-41bc-a30a-4e09bde21387"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.546069 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa4a9264-1cb9-41bc-a30a-4e09bde21387-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "aa4a9264-1cb9-41bc-a30a-4e09bde21387" (UID: "aa4a9264-1cb9-41bc-a30a-4e09bde21387"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.546176 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa4a9264-1cb9-41bc-a30a-4e09bde21387-kube-api-access-fv7dr" (OuterVolumeSpecName: "kube-api-access-fv7dr") pod "aa4a9264-1cb9-41bc-a30a-4e09bde21387" (UID: "aa4a9264-1cb9-41bc-a30a-4e09bde21387"). InnerVolumeSpecName "kube-api-access-fv7dr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.565527 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "aa4a9264-1cb9-41bc-a30a-4e09bde21387" (UID: "aa4a9264-1cb9-41bc-a30a-4e09bde21387"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.642694 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-run-openvswitch\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.642786 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-host-cni-netd\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.642820 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.642857 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-run-systemd\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.642988 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-host-run-netns\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.643074 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-host-kubelet\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.643152 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-etc-openvswitch\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.643257 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-ovn-node-metrics-cert\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.643319 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-ovnkube-config\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.643351 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd22v\" (UniqueName: \"kubernetes.io/projected/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-kube-api-access-wd22v\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.643498 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-host-cni-bin\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.643558 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-ovnkube-script-lib\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.643594 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-host-slash\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.643657 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-log-socket\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.643685 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-run-ovn\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.643709 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-host-run-ovn-kubernetes\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.643741 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-node-log\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.643782 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-env-overrides\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.643829 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-systemd-units\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.643895 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-var-lib-openvswitch\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.643974 4718 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aa4a9264-1cb9-41bc-a30a-4e09bde21387-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.643986 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv7dr\" (UniqueName: \"kubernetes.io/projected/aa4a9264-1cb9-41bc-a30a-4e09bde21387-kube-api-access-fv7dr\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.643997 4718 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa4a9264-1cb9-41bc-a30a-4e09bde21387-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.644007 4718 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.644018 4718 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.644027 4718 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa4a9264-1cb9-41bc-a30a-4e09bde21387-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.644049 4718 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aa4a9264-1cb9-41bc-a30a-4e09bde21387-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.644057 4718 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa4a9264-1cb9-41bc-a30a-4e09bde21387-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.714281 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qb66k_49e539fc-7a1f-42e0-9a69-230331321d85/kube-multus/1.log" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 
14:54:42.714926 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qb66k_49e539fc-7a1f-42e0-9a69-230331321d85/kube-multus/0.log" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.714982 4718 generic.go:334] "Generic (PLEG): container finished" podID="49e539fc-7a1f-42e0-9a69-230331321d85" containerID="83e7c0362ec8bf0fdd30ec09b91cc3b684584ac51f4c998ff677399de339d44e" exitCode=2 Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.715049 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qb66k" event={"ID":"49e539fc-7a1f-42e0-9a69-230331321d85","Type":"ContainerDied","Data":"83e7c0362ec8bf0fdd30ec09b91cc3b684584ac51f4c998ff677399de339d44e"} Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.715087 4718 scope.go:117] "RemoveContainer" containerID="d26c582291215f5e4824dde7efbe5838531ec33f600f128e225b7f3cdd0ff59d" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.715718 4718 scope.go:117] "RemoveContainer" containerID="83e7c0362ec8bf0fdd30ec09b91cc3b684584ac51f4c998ff677399de339d44e" Nov 23 14:54:42 crc kubenswrapper[4718]: E1123 14:54:42.715996 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-qb66k_openshift-multus(49e539fc-7a1f-42e0-9a69-230331321d85)\"" pod="openshift-multus/multus-qb66k" podUID="49e539fc-7a1f-42e0-9a69-230331321d85" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.718976 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zjskv_aa4a9264-1cb9-41bc-a30a-4e09bde21387/ovnkube-controller/2.log" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.724825 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zjskv_aa4a9264-1cb9-41bc-a30a-4e09bde21387/ovn-acl-logging/0.log" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.725466 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zjskv_aa4a9264-1cb9-41bc-a30a-4e09bde21387/ovn-controller/0.log" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.725862 4718 generic.go:334] "Generic (PLEG): container finished" podID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerID="c1809618031d9feff7a50eae28af1e1366af09869ccc0b4faba572b0ecc4c387" exitCode=0 Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.725893 4718 generic.go:334] "Generic (PLEG): container finished" podID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerID="98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6" exitCode=0 Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.725904 4718 generic.go:334] "Generic (PLEG): container finished" podID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerID="2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3" exitCode=0 Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.725917 4718 generic.go:334] "Generic (PLEG): container finished" podID="aa4a9264-1cb9-41bc-a30a-4e09bde21387" containerID="e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159" exitCode=0 Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.725915 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerDied","Data":"c1809618031d9feff7a50eae28af1e1366af09869ccc0b4faba572b0ecc4c387"} Nov 23 14:54:42 crc 
kubenswrapper[4718]: I1123 14:54:42.725959 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.725968 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerDied","Data":"98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6"} Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.725985 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerDied","Data":"2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3"} Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.725999 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerDied","Data":"e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159"} Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.726018 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjskv" event={"ID":"aa4a9264-1cb9-41bc-a30a-4e09bde21387","Type":"ContainerDied","Data":"59aee2898c5a453dcf6f6f076db7a9ad00a9b668d73cc35b1bccb8d6f70dc33b"} Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.744805 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-host-cni-bin\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.744934 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-ovnkube-script-lib\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.744962 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-host-slash\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745013 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-log-socket\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745034 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-run-ovn\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745054 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-host-run-ovn-kubernetes\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745093 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-node-log\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745112 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-env-overrides\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745129 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-systemd-units\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745184 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-var-lib-openvswitch\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745206 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-run-openvswitch\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745220 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-host-slash\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745249 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-node-log\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745226 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-host-cni-netd\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745284 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-log-socket\") pod \"ovnkube-node-lpxw8\" (UID: 
\"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745285 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-host-cni-netd\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745297 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745337 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-run-systemd\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745361 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-var-lib-openvswitch\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745374 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-host-run-netns\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745404 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-host-kubelet\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745402 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-run-openvswitch\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745507 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-run-ovn\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745539 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-etc-openvswitch\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745558 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-host-run-ovn-kubernetes\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745577 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-ovn-node-metrics-cert\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745591 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745615 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-ovnkube-config\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745661 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd22v\" (UniqueName: \"kubernetes.io/projected/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-kube-api-access-wd22v\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745718 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-host-cni-bin\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745321 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-systemd-units\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745793 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-run-systemd\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745804 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-host-run-netns\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 
14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745821 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-env-overrides\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.745844 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-etc-openvswitch\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.746272 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-ovnkube-config\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.746421 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-host-kubelet\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.747185 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-ovnkube-script-lib\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.751113 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-ovn-node-metrics-cert\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.759846 4718 scope.go:117] "RemoveContainer" containerID="c1809618031d9feff7a50eae28af1e1366af09869ccc0b4faba572b0ecc4c387" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.768006 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zjskv"] Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.769089 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd22v\" (UniqueName: \"kubernetes.io/projected/bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd-kube-api-access-wd22v\") pod \"ovnkube-node-lpxw8\" (UID: \"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd\") " pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.774960 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zjskv"] Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.798589 4718 scope.go:117] "RemoveContainer" containerID="ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.811615 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.813849 4718 scope.go:117] "RemoveContainer" containerID="98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.834253 4718 scope.go:117] "RemoveContainer" containerID="2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3" Nov 23 14:54:42 crc kubenswrapper[4718]: W1123 14:54:42.838097 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf19ca8d_cf5b_4ce2_9db9_a8e76188ffbd.slice/crio-9fbac3583db22272113949ee7dfe3973e7beeebf50d4fc3f3ccf4fa77c1f3cef WatchSource:0}: Error finding container 9fbac3583db22272113949ee7dfe3973e7beeebf50d4fc3f3ccf4fa77c1f3cef: Status 404 returned error can't find the container with id 9fbac3583db22272113949ee7dfe3973e7beeebf50d4fc3f3ccf4fa77c1f3cef Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.850695 4718 scope.go:117] "RemoveContainer" containerID="e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.862694 4718 scope.go:117] "RemoveContainer" containerID="9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.874460 4718 scope.go:117] "RemoveContainer" containerID="f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.896087 4718 scope.go:117] "RemoveContainer" containerID="5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.915166 4718 scope.go:117] "RemoveContainer" containerID="6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.931488 4718 scope.go:117] "RemoveContainer" containerID="684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.949347 4718 scope.go:117] "RemoveContainer" containerID="c1809618031d9feff7a50eae28af1e1366af09869ccc0b4faba572b0ecc4c387" Nov 23 14:54:42 crc kubenswrapper[4718]: E1123 14:54:42.950400 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1809618031d9feff7a50eae28af1e1366af09869ccc0b4faba572b0ecc4c387\": container with ID starting with c1809618031d9feff7a50eae28af1e1366af09869ccc0b4faba572b0ecc4c387 not found: ID does not exist" containerID="c1809618031d9feff7a50eae28af1e1366af09869ccc0b4faba572b0ecc4c387" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.950493 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1809618031d9feff7a50eae28af1e1366af09869ccc0b4faba572b0ecc4c387"} err="failed to get container status \"c1809618031d9feff7a50eae28af1e1366af09869ccc0b4faba572b0ecc4c387\": rpc error: code = NotFound desc = could not find container \"c1809618031d9feff7a50eae28af1e1366af09869ccc0b4faba572b0ecc4c387\": container with ID starting with c1809618031d9feff7a50eae28af1e1366af09869ccc0b4faba572b0ecc4c387 not found: ID does not exist" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.950536 4718 scope.go:117] "RemoveContainer" containerID="ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff" Nov 23 14:54:42 crc kubenswrapper[4718]: E1123 14:54:42.951086 4718 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff\": container with ID starting with ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff not found: ID does not exist" containerID="ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.951127 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff"} err="failed to get container status \"ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff\": rpc error: code = NotFound desc = could not find container \"ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff\": container with ID starting with ac0e8f0cd14780374a230d9d6d8617452b9500921d7737d94829a08481a0ebff not found: ID does not exist" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.951163 4718 scope.go:117] "RemoveContainer" containerID="98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6" Nov 23 14:54:42 crc kubenswrapper[4718]: E1123 14:54:42.951787 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\": container with ID starting with 98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6 not found: ID does not exist" containerID="98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.951839 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6"} err="failed to get container status \"98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\": rpc error: code = NotFound desc = could not find container \"98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6\": container with ID starting with 98ea2ebbcd658a8fa9e65e9a817bef7cf94bd0c852cf9dab4293321917bcc3e6 not found: ID does not exist" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.951857 4718 scope.go:117] "RemoveContainer" containerID="2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3" Nov 23 14:54:42 crc kubenswrapper[4718]: E1123 14:54:42.952261 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\": container with ID starting with 2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3 not found: ID does not exist" containerID="2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.952311 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3"} err="failed to get container status \"2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\": rpc error: code = NotFound desc = could not find container \"2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3\": container with ID starting with 2a9692601c82cb3545b07c1140e5a75c5ea861eeb3a2c040dd7af3dada6a25f3 not found: ID does not exist" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.952335 4718 scope.go:117] "RemoveContainer" 
containerID="e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159" Nov 23 14:54:42 crc kubenswrapper[4718]: E1123 14:54:42.952714 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\": container with ID starting with e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159 not found: ID does not exist" containerID="e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.952752 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159"} err="failed to get container status \"e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\": rpc error: code = NotFound desc = could not find container \"e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159\": container with ID starting with e0070c15a1c66ae61df2a66d82639a65f906fe27b1983165070d90963bfbb159 not found: ID does not exist" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.952776 4718 scope.go:117] "RemoveContainer" containerID="9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7" Nov 23 14:54:42 crc kubenswrapper[4718]: E1123 14:54:42.953115 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\": container with ID starting with 9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7 not found: ID does not exist" containerID="9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.953149 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7"} err="failed to get container status \"9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\": rpc error: code = NotFound desc = could not find container \"9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7\": container with ID starting with 9eb77d79d3648306df6bcc42db2ac10af9a1bfa1022a642ede671b2a92a6b8e7 not found: ID does not exist" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.953169 4718 scope.go:117] "RemoveContainer" containerID="f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d" Nov 23 14:54:42 crc kubenswrapper[4718]: E1123 14:54:42.953674 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\": container with ID starting with f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d not found: ID does not exist" containerID="f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.953714 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d"} err="failed to get container status \"f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\": rpc error: code = NotFound desc = could not find container \"f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d\": container with ID starting with 
f60c03301b32a6dad551c1099f57e8be94d45ec6ae972520c5c66b7e788ad87d not found: ID does not exist" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.953759 4718 scope.go:117] "RemoveContainer" containerID="5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2" Nov 23 14:54:42 crc kubenswrapper[4718]: E1123 14:54:42.954382 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\": container with ID starting with 5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2 not found: ID does not exist" containerID="5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.954428 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2"} err="failed to get container status \"5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\": rpc error: code = NotFound desc = could not find container \"5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2\": container with ID starting with 5b58842b4ff12bd84e43cdeb7d93cca1e39239c8a83bf8139b8e6adb06d847c2 not found: ID does not exist" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.955852 4718 scope.go:117] "RemoveContainer" containerID="6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88" Nov 23 14:54:42 crc kubenswrapper[4718]: E1123 14:54:42.956240 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\": container with ID starting with 6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88 not found: ID does not exist" containerID="6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.956277 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88"} err="failed to get container status \"6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\": rpc error: code = NotFound desc = could not find container \"6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88\": container with ID starting with 6b03a39209158eca732e01032419cf00cd44c2b5c8fc5fc47309a4d785af1a88 not found: ID does not exist" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.956297 4718 scope.go:117] "RemoveContainer" containerID="684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14" Nov 23 14:54:42 crc kubenswrapper[4718]: E1123 14:54:42.956588 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\": container with ID starting with 684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14 not found: ID does not exist" containerID="684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14" Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.956628 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14"} err="failed to get container status \"684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\": rpc 
error: code = NotFound desc = could not find container \"684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14\": container with ID starting with 684c6563606b90d89efe35bcbb3f55f2dece327b5a33398211305d1404f41d14 not found: ID does not exist"
Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.956657 4718 scope.go:117] "RemoveContainer" containerID="c1809618031d9feff7a50eae28af1e1366af09869ccc0b4faba572b0ecc4c387"
Nov 23 14:54:42 crc kubenswrapper[4718]: I1123 14:54:42.957185 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1809618031d9feff7a50eae28af1e1366af09869ccc0b4faba572b0ecc4c387"} err="failed to get container status \"c1809618031d9feff7a50eae28af1e1366af09869ccc0b4faba572b0ecc4c387\": rpc error: code = NotFound desc = could not find container \"c1809618031d9feff7a50eae28af1e1366af09869ccc0b4faba572b0ecc4c387\": container with ID starting with c1809618031d9feff7a50eae28af1e1366af09869ccc0b4faba572b0ecc4c387 not found: ID does not exist"
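The "RemoveContainer" / "DeleteContainer returned error" pairs above all follow the same shape: kubelet asks the runtime for the status of a container it is about to delete, CRI-O answers with gRPC code NotFound because the container is already gone, and the deletor records the error and moves on. Treating NotFound as success is what makes this cleanup safe to repeat. A minimal sketch of that idempotent pattern against the CRI runtime service (removeIfPresent is an illustrative helper, not kubelet's actual code; it assumes an already-connected runtimeapi.RuntimeServiceClient):

    package crihelper

    import (
    	"context"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // removeIfPresent treats a NotFound answer from the runtime as success,
    // which is why repeated cleanup passes over the same IDs are harmless.
    func removeIfPresent(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
    	_, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
    	if status.Code(err) == codes.NotFound {
    		return nil // already deleted by the runtime; nothing left to do
    	}
    	if err != nil {
    		return err // some other RPC failure; report it
    	}
    	_, err = rt.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{ContainerId: id})
    	return err
    }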
Nov 23 14:54:43 crc kubenswrapper[4718]: I1123 14:54:43.734067 4718 generic.go:334] "Generic (PLEG): container finished" podID="bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd" containerID="0064bb1c602b08c718da44f0581604ee3a119797cf856593866f1e4b1c036839" exitCode=0
Nov 23 14:54:43 crc kubenswrapper[4718]: I1123 14:54:43.734166 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" event={"ID":"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd","Type":"ContainerDied","Data":"0064bb1c602b08c718da44f0581604ee3a119797cf856593866f1e4b1c036839"}
Nov 23 14:54:43 crc kubenswrapper[4718]: I1123 14:54:43.735432 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" event={"ID":"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd","Type":"ContainerStarted","Data":"9fbac3583db22272113949ee7dfe3973e7beeebf50d4fc3f3ccf4fa77c1f3cef"}
Nov 23 14:54:43 crc kubenswrapper[4718]: I1123 14:54:43.738130 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qb66k_49e539fc-7a1f-42e0-9a69-230331321d85/kube-multus/1.log"
Nov 23 14:54:44 crc kubenswrapper[4718]: I1123 14:54:44.451003 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa4a9264-1cb9-41bc-a30a-4e09bde21387"
path="/var/lib/kubelet/pods/aa4a9264-1cb9-41bc-a30a-4e09bde21387/volumes" Nov 23 14:54:44 crc kubenswrapper[4718]: I1123 14:54:44.750568 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" event={"ID":"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd","Type":"ContainerStarted","Data":"ad4338ee53f618feb35ffacec1e0491c19c2712ef4a40f71a3ee124e79597c54"} Nov 23 14:54:44 crc kubenswrapper[4718]: I1123 14:54:44.750619 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" event={"ID":"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd","Type":"ContainerStarted","Data":"63254e2d3ce9c7028c8436f8455da8a0a02ab22fbf4ede77decb904f30b011bd"} Nov 23 14:54:44 crc kubenswrapper[4718]: I1123 14:54:44.750634 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" event={"ID":"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd","Type":"ContainerStarted","Data":"0b47bd1e3e6cf15b18f24a4b9a312c655bef3c89739124165040e9d7ec9e9d0e"} Nov 23 14:54:44 crc kubenswrapper[4718]: I1123 14:54:44.750645 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" event={"ID":"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd","Type":"ContainerStarted","Data":"29fd836efe3c57849bdc7bbb1470a3604558cedc97e939c3d1ebc2bbb28b8cc8"} Nov 23 14:54:44 crc kubenswrapper[4718]: I1123 14:54:44.750657 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" event={"ID":"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd","Type":"ContainerStarted","Data":"a749736afb5980b9694714a077bde6e35b27943da4478116283ad024d4a42896"} Nov 23 14:54:44 crc kubenswrapper[4718]: I1123 14:54:44.750668 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" event={"ID":"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd","Type":"ContainerStarted","Data":"03732543401e8bdba45f70ee0b47675db2e2c4194527f61c5717938a6f3e633a"} Nov 23 14:54:46 crc kubenswrapper[4718]: I1123 14:54:46.765960 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" event={"ID":"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd","Type":"ContainerStarted","Data":"94be2a00ec7e15df4d47fda46ffb3002e112d7889b1615b05d54c42fcfa8c9f9"} Nov 23 14:54:49 crc kubenswrapper[4718]: I1123 14:54:49.784912 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" event={"ID":"bf19ca8d-cf5b-4ce2-9db9-a8e76188ffbd","Type":"ContainerStarted","Data":"4306ea4384e8553517672b992c0197808ec90de49b7a79125c047d2a79d99990"} Nov 23 14:54:49 crc kubenswrapper[4718]: I1123 14:54:49.785520 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:49 crc kubenswrapper[4718]: I1123 14:54:49.785720 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:49 crc kubenswrapper[4718]: I1123 14:54:49.785769 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:49 crc kubenswrapper[4718]: I1123 14:54:49.807788 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:49 crc kubenswrapper[4718]: I1123 14:54:49.819366 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" 
podStartSLOduration=7.819341579 podStartE2EDuration="7.819341579s" podCreationTimestamp="2025-11-23 14:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:54:49.814002315 +0000 UTC m=+541.053622169" watchObservedRunningTime="2025-11-23 14:54:49.819341579 +0000 UTC m=+541.058961433" Nov 23 14:54:49 crc kubenswrapper[4718]: I1123 14:54:49.824431 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:54:53 crc kubenswrapper[4718]: I1123 14:54:53.053601 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 14:54:53 crc kubenswrapper[4718]: I1123 14:54:53.053937 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 14:54:58 crc kubenswrapper[4718]: I1123 14:54:58.440586 4718 scope.go:117] "RemoveContainer" containerID="83e7c0362ec8bf0fdd30ec09b91cc3b684584ac51f4c998ff677399de339d44e" Nov 23 14:54:58 crc kubenswrapper[4718]: I1123 14:54:58.838396 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qb66k_49e539fc-7a1f-42e0-9a69-230331321d85/kube-multus/1.log" Nov 23 14:54:58 crc kubenswrapper[4718]: I1123 14:54:58.838788 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qb66k" event={"ID":"49e539fc-7a1f-42e0-9a69-230331321d85","Type":"ContainerStarted","Data":"0f2243c8b9317f2eb9814c2fa806aae4a3b27031d6870ad961e22ba2d33b086c"} Nov 23 14:55:12 crc kubenswrapper[4718]: I1123 14:55:12.839229 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lpxw8" Nov 23 14:55:20 crc kubenswrapper[4718]: I1123 14:55:20.324180 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2"] Nov 23 14:55:20 crc kubenswrapper[4718]: I1123 14:55:20.326697 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2" Nov 23 14:55:20 crc kubenswrapper[4718]: I1123 14:55:20.329605 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 23 14:55:20 crc kubenswrapper[4718]: I1123 14:55:20.337642 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2"] Nov 23 14:55:20 crc kubenswrapper[4718]: I1123 14:55:20.443623 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8fbr\" (UniqueName: \"kubernetes.io/projected/363974b8-229c-43e4-85cf-a7e4187ed8d4-kube-api-access-b8fbr\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2\" (UID: \"363974b8-229c-43e4-85cf-a7e4187ed8d4\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2" Nov 23 14:55:20 crc kubenswrapper[4718]: I1123 14:55:20.443997 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/363974b8-229c-43e4-85cf-a7e4187ed8d4-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2\" (UID: \"363974b8-229c-43e4-85cf-a7e4187ed8d4\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2" Nov 23 14:55:20 crc kubenswrapper[4718]: I1123 14:55:20.444047 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/363974b8-229c-43e4-85cf-a7e4187ed8d4-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2\" (UID: \"363974b8-229c-43e4-85cf-a7e4187ed8d4\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2" Nov 23 14:55:20 crc kubenswrapper[4718]: I1123 14:55:20.545169 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/363974b8-229c-43e4-85cf-a7e4187ed8d4-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2\" (UID: \"363974b8-229c-43e4-85cf-a7e4187ed8d4\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2" Nov 23 14:55:20 crc kubenswrapper[4718]: I1123 14:55:20.545257 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/363974b8-229c-43e4-85cf-a7e4187ed8d4-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2\" (UID: \"363974b8-229c-43e4-85cf-a7e4187ed8d4\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2" Nov 23 14:55:20 crc kubenswrapper[4718]: I1123 14:55:20.545365 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8fbr\" (UniqueName: \"kubernetes.io/projected/363974b8-229c-43e4-85cf-a7e4187ed8d4-kube-api-access-b8fbr\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2\" (UID: \"363974b8-229c-43e4-85cf-a7e4187ed8d4\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2" Nov 23 14:55:20 crc kubenswrapper[4718]: I1123 14:55:20.546082 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/363974b8-229c-43e4-85cf-a7e4187ed8d4-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2\" (UID: \"363974b8-229c-43e4-85cf-a7e4187ed8d4\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2" Nov 23 14:55:20 crc kubenswrapper[4718]: I1123 14:55:20.546244 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/363974b8-229c-43e4-85cf-a7e4187ed8d4-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2\" (UID: \"363974b8-229c-43e4-85cf-a7e4187ed8d4\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2" Nov 23 14:55:20 crc kubenswrapper[4718]: I1123 14:55:20.580003 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8fbr\" (UniqueName: \"kubernetes.io/projected/363974b8-229c-43e4-85cf-a7e4187ed8d4-kube-api-access-b8fbr\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2\" (UID: \"363974b8-229c-43e4-85cf-a7e4187ed8d4\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2" Nov 23 14:55:20 crc kubenswrapper[4718]: I1123 14:55:20.666126 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2" Nov 23 14:55:20 crc kubenswrapper[4718]: I1123 14:55:20.874575 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2"] Nov 23 14:55:20 crc kubenswrapper[4718]: I1123 14:55:20.973413 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2" event={"ID":"363974b8-229c-43e4-85cf-a7e4187ed8d4","Type":"ContainerStarted","Data":"5663d30315b6c455e50262a87abaee34aa80a51a73dacb3108d34e2d177783d6"} Nov 23 14:55:21 crc kubenswrapper[4718]: I1123 14:55:21.981086 4718 generic.go:334] "Generic (PLEG): container finished" podID="363974b8-229c-43e4-85cf-a7e4187ed8d4" containerID="3289eedbc36d8f9032982142893ea6b49b8758afaa607f4aa34be414e82bafe1" exitCode=0 Nov 23 14:55:21 crc kubenswrapper[4718]: I1123 14:55:21.981210 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2" event={"ID":"363974b8-229c-43e4-85cf-a7e4187ed8d4","Type":"ContainerDied","Data":"3289eedbc36d8f9032982142893ea6b49b8758afaa607f4aa34be414e82bafe1"} Nov 23 14:55:23 crc kubenswrapper[4718]: I1123 14:55:23.052871 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 14:55:23 crc kubenswrapper[4718]: I1123 14:55:23.052945 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 14:55:23 crc kubenswrapper[4718]: I1123 14:55:23.053004 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" Nov 23 14:55:23 crc kubenswrapper[4718]: I1123 14:55:23.054011 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e69f70c6d8e0cb888bc50f00b9d126d21fc061450a3827ee8309102066c2eb2c"} pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 14:55:23 crc kubenswrapper[4718]: I1123 14:55:23.054135 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" containerID="cri-o://e69f70c6d8e0cb888bc50f00b9d126d21fc061450a3827ee8309102066c2eb2c" gracePeriod=600 Nov 23 14:55:23 crc kubenswrapper[4718]: I1123 14:55:23.994837 4718 generic.go:334] "Generic (PLEG): container finished" podID="363974b8-229c-43e4-85cf-a7e4187ed8d4" containerID="e81108f1140a469e66947d828d6e25509e458496a6f5a41a42cff9f37e0bf5d7" exitCode=0 Nov 23 14:55:23 crc kubenswrapper[4718]: I1123 14:55:23.994939 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2" event={"ID":"363974b8-229c-43e4-85cf-a7e4187ed8d4","Type":"ContainerDied","Data":"e81108f1140a469e66947d828d6e25509e458496a6f5a41a42cff9f37e0bf5d7"} Nov 23 14:55:24 crc kubenswrapper[4718]: I1123 14:55:24.000132 4718 generic.go:334] "Generic (PLEG): container finished" podID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerID="e69f70c6d8e0cb888bc50f00b9d126d21fc061450a3827ee8309102066c2eb2c" exitCode=0 Nov 23 14:55:24 crc kubenswrapper[4718]: I1123 14:55:24.000171 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerDied","Data":"e69f70c6d8e0cb888bc50f00b9d126d21fc061450a3827ee8309102066c2eb2c"} Nov 23 14:55:24 crc kubenswrapper[4718]: I1123 14:55:24.000200 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerStarted","Data":"e672c46344261e8af36921b484538b589267121731870b982143bf0d2e76b2f1"} Nov 23 14:55:24 crc kubenswrapper[4718]: I1123 14:55:24.000216 4718 scope.go:117] "RemoveContainer" containerID="ce79aba3ac30f4f8a186f32435648dbf5a20f89ecec480680ab901eafede0c18" Nov 23 14:55:25 crc kubenswrapper[4718]: I1123 14:55:25.011358 4718 generic.go:334] "Generic (PLEG): container finished" podID="363974b8-229c-43e4-85cf-a7e4187ed8d4" containerID="4f43ad348fabd32120f1bc6d99fd6f16c1998ca2267827280dd30f01835ed65d" exitCode=0 Nov 23 14:55:25 crc kubenswrapper[4718]: I1123 14:55:25.011812 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2" event={"ID":"363974b8-229c-43e4-85cf-a7e4187ed8d4","Type":"ContainerDied","Data":"4f43ad348fabd32120f1bc6d99fd6f16c1998ca2267827280dd30f01835ed65d"} Nov 23 14:55:26 crc kubenswrapper[4718]: I1123 14:55:26.265256 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2" Nov 23 14:55:26 crc kubenswrapper[4718]: I1123 14:55:26.425574 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8fbr\" (UniqueName: \"kubernetes.io/projected/363974b8-229c-43e4-85cf-a7e4187ed8d4-kube-api-access-b8fbr\") pod \"363974b8-229c-43e4-85cf-a7e4187ed8d4\" (UID: \"363974b8-229c-43e4-85cf-a7e4187ed8d4\") " Nov 23 14:55:26 crc kubenswrapper[4718]: I1123 14:55:26.425827 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/363974b8-229c-43e4-85cf-a7e4187ed8d4-bundle\") pod \"363974b8-229c-43e4-85cf-a7e4187ed8d4\" (UID: \"363974b8-229c-43e4-85cf-a7e4187ed8d4\") " Nov 23 14:55:26 crc kubenswrapper[4718]: I1123 14:55:26.425947 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/363974b8-229c-43e4-85cf-a7e4187ed8d4-util\") pod \"363974b8-229c-43e4-85cf-a7e4187ed8d4\" (UID: \"363974b8-229c-43e4-85cf-a7e4187ed8d4\") " Nov 23 14:55:26 crc kubenswrapper[4718]: I1123 14:55:26.426723 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/363974b8-229c-43e4-85cf-a7e4187ed8d4-bundle" (OuterVolumeSpecName: "bundle") pod "363974b8-229c-43e4-85cf-a7e4187ed8d4" (UID: "363974b8-229c-43e4-85cf-a7e4187ed8d4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:55:26 crc kubenswrapper[4718]: I1123 14:55:26.433829 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/363974b8-229c-43e4-85cf-a7e4187ed8d4-kube-api-access-b8fbr" (OuterVolumeSpecName: "kube-api-access-b8fbr") pod "363974b8-229c-43e4-85cf-a7e4187ed8d4" (UID: "363974b8-229c-43e4-85cf-a7e4187ed8d4"). InnerVolumeSpecName "kube-api-access-b8fbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:55:26 crc kubenswrapper[4718]: I1123 14:55:26.449562 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/363974b8-229c-43e4-85cf-a7e4187ed8d4-util" (OuterVolumeSpecName: "util") pod "363974b8-229c-43e4-85cf-a7e4187ed8d4" (UID: "363974b8-229c-43e4-85cf-a7e4187ed8d4"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:55:26 crc kubenswrapper[4718]: I1123 14:55:26.527393 4718 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/363974b8-229c-43e4-85cf-a7e4187ed8d4-util\") on node \"crc\" DevicePath \"\"" Nov 23 14:55:26 crc kubenswrapper[4718]: I1123 14:55:26.527491 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8fbr\" (UniqueName: \"kubernetes.io/projected/363974b8-229c-43e4-85cf-a7e4187ed8d4-kube-api-access-b8fbr\") on node \"crc\" DevicePath \"\"" Nov 23 14:55:26 crc kubenswrapper[4718]: I1123 14:55:26.527514 4718 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/363974b8-229c-43e4-85cf-a7e4187ed8d4-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 14:55:27 crc kubenswrapper[4718]: I1123 14:55:27.029054 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2" event={"ID":"363974b8-229c-43e4-85cf-a7e4187ed8d4","Type":"ContainerDied","Data":"5663d30315b6c455e50262a87abaee34aa80a51a73dacb3108d34e2d177783d6"} Nov 23 14:55:27 crc kubenswrapper[4718]: I1123 14:55:27.029097 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5663d30315b6c455e50262a87abaee34aa80a51a73dacb3108d34e2d177783d6" Nov 23 14:55:27 crc kubenswrapper[4718]: I1123 14:55:27.029131 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2" Nov 23 14:55:31 crc kubenswrapper[4718]: I1123 14:55:31.920241 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-vj25n"] Nov 23 14:55:31 crc kubenswrapper[4718]: E1123 14:55:31.920973 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="363974b8-229c-43e4-85cf-a7e4187ed8d4" containerName="util" Nov 23 14:55:31 crc kubenswrapper[4718]: I1123 14:55:31.920987 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="363974b8-229c-43e4-85cf-a7e4187ed8d4" containerName="util" Nov 23 14:55:31 crc kubenswrapper[4718]: E1123 14:55:31.921005 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="363974b8-229c-43e4-85cf-a7e4187ed8d4" containerName="extract" Nov 23 14:55:31 crc kubenswrapper[4718]: I1123 14:55:31.921010 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="363974b8-229c-43e4-85cf-a7e4187ed8d4" containerName="extract" Nov 23 14:55:31 crc kubenswrapper[4718]: E1123 14:55:31.921019 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="363974b8-229c-43e4-85cf-a7e4187ed8d4" containerName="pull" Nov 23 14:55:31 crc kubenswrapper[4718]: I1123 14:55:31.921025 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="363974b8-229c-43e4-85cf-a7e4187ed8d4" containerName="pull" Nov 23 14:55:31 crc kubenswrapper[4718]: I1123 14:55:31.921128 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="363974b8-229c-43e4-85cf-a7e4187ed8d4" containerName="extract" Nov 23 14:55:31 crc kubenswrapper[4718]: I1123 14:55:31.921496 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-vj25n" Nov 23 14:55:31 crc kubenswrapper[4718]: I1123 14:55:31.923778 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 23 14:55:31 crc kubenswrapper[4718]: I1123 14:55:31.924053 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 23 14:55:31 crc kubenswrapper[4718]: I1123 14:55:31.925926 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-bfgb7" Nov 23 14:55:31 crc kubenswrapper[4718]: I1123 14:55:31.943433 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-vj25n"] Nov 23 14:55:32 crc kubenswrapper[4718]: I1123 14:55:32.099858 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d79wq\" (UniqueName: \"kubernetes.io/projected/62a579ff-5a89-4c20-afea-9419bd3bc1a0-kube-api-access-d79wq\") pod \"nmstate-operator-557fdffb88-vj25n\" (UID: \"62a579ff-5a89-4c20-afea-9419bd3bc1a0\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-vj25n" Nov 23 14:55:32 crc kubenswrapper[4718]: I1123 14:55:32.201811 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d79wq\" (UniqueName: \"kubernetes.io/projected/62a579ff-5a89-4c20-afea-9419bd3bc1a0-kube-api-access-d79wq\") pod \"nmstate-operator-557fdffb88-vj25n\" (UID: \"62a579ff-5a89-4c20-afea-9419bd3bc1a0\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-vj25n" Nov 23 14:55:32 crc kubenswrapper[4718]: I1123 14:55:32.224629 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d79wq\" (UniqueName: \"kubernetes.io/projected/62a579ff-5a89-4c20-afea-9419bd3bc1a0-kube-api-access-d79wq\") pod \"nmstate-operator-557fdffb88-vj25n\" (UID: \"62a579ff-5a89-4c20-afea-9419bd3bc1a0\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-vj25n" Nov 23 14:55:32 crc kubenswrapper[4718]: I1123 14:55:32.237591 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-vj25n" Nov 23 14:55:32 crc kubenswrapper[4718]: I1123 14:55:32.451722 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-vj25n"] Nov 23 14:55:33 crc kubenswrapper[4718]: I1123 14:55:33.059513 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-vj25n" event={"ID":"62a579ff-5a89-4c20-afea-9419bd3bc1a0","Type":"ContainerStarted","Data":"08d2fbd1c47af2fbb58e1a14ba101be2788fcdf2d9592aa7c409b88965565b6b"} Nov 23 14:55:35 crc kubenswrapper[4718]: I1123 14:55:35.072226 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-vj25n" event={"ID":"62a579ff-5a89-4c20-afea-9419bd3bc1a0","Type":"ContainerStarted","Data":"993c5191bc6c97c5f6992455e48f922d85789b2f70d82aaa48f507cd7d4c0afe"} Nov 23 14:55:35 crc kubenswrapper[4718]: I1123 14:55:35.092500 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-vj25n" podStartSLOduration=2.02160472 podStartE2EDuration="4.09247211s" podCreationTimestamp="2025-11-23 14:55:31 +0000 UTC" firstStartedPulling="2025-11-23 14:55:32.458063468 +0000 UTC m=+583.697683312" lastFinishedPulling="2025-11-23 14:55:34.528930858 +0000 UTC m=+585.768550702" observedRunningTime="2025-11-23 14:55:35.090876546 +0000 UTC m=+586.330496450" watchObservedRunningTime="2025-11-23 14:55:35.09247211 +0000 UTC m=+586.332092004" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.355145 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-65tj8"] Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.356696 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-65tj8" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.358823 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-fg244" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.366404 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-65tj8"] Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.382225 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-z9n6m"] Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.383129 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-z9n6m" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.387899 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-t5mgc"] Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.388777 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5mgc" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.394464 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.408666 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-t5mgc"] Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.491532 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-lvlqg"] Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.492141 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-lvlqg" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.495850 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-f46ph" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.495974 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.495999 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.501107 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a6e7f145-b630-4486-9a8d-e08d114c3f0a-dbus-socket\") pod \"nmstate-handler-z9n6m\" (UID: \"a6e7f145-b630-4486-9a8d-e08d114c3f0a\") " pod="openshift-nmstate/nmstate-handler-z9n6m" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.501202 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xgt4\" (UniqueName: \"kubernetes.io/projected/21067045-a7f2-4d80-a1fe-d2c15d3b7ee9-kube-api-access-7xgt4\") pod \"nmstate-webhook-6b89b748d8-t5mgc\" (UID: \"21067045-a7f2-4d80-a1fe-d2c15d3b7ee9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5mgc" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.501244 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw84v\" (UniqueName: \"kubernetes.io/projected/a6e7f145-b630-4486-9a8d-e08d114c3f0a-kube-api-access-pw84v\") pod \"nmstate-handler-z9n6m\" (UID: \"a6e7f145-b630-4486-9a8d-e08d114c3f0a\") " pod="openshift-nmstate/nmstate-handler-z9n6m" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.501265 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a6e7f145-b630-4486-9a8d-e08d114c3f0a-ovs-socket\") pod \"nmstate-handler-z9n6m\" (UID: \"a6e7f145-b630-4486-9a8d-e08d114c3f0a\") " pod="openshift-nmstate/nmstate-handler-z9n6m" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.501293 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlx9d\" (UniqueName: \"kubernetes.io/projected/955636b8-9879-4e63-a399-6ac037c1fcd5-kube-api-access-dlx9d\") pod \"nmstate-metrics-5dcf9c57c5-65tj8\" (UID: \"955636b8-9879-4e63-a399-6ac037c1fcd5\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-65tj8" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.501416 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a6e7f145-b630-4486-9a8d-e08d114c3f0a-nmstate-lock\") pod \"nmstate-handler-z9n6m\" (UID: \"a6e7f145-b630-4486-9a8d-e08d114c3f0a\") " pod="openshift-nmstate/nmstate-handler-z9n6m" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.501509 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/21067045-a7f2-4d80-a1fe-d2c15d3b7ee9-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-t5mgc\" (UID: \"21067045-a7f2-4d80-a1fe-d2c15d3b7ee9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5mgc" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.506771 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-lvlqg"] Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.602763 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xgt4\" (UniqueName: \"kubernetes.io/projected/21067045-a7f2-4d80-a1fe-d2c15d3b7ee9-kube-api-access-7xgt4\") pod \"nmstate-webhook-6b89b748d8-t5mgc\" (UID: \"21067045-a7f2-4d80-a1fe-d2c15d3b7ee9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5mgc" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.602837 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/36e9d48f-9291-4ffd-8ca3-342d488e8bc2-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-lvlqg\" (UID: \"36e9d48f-9291-4ffd-8ca3-342d488e8bc2\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-lvlqg" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.602866 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw84v\" (UniqueName: \"kubernetes.io/projected/a6e7f145-b630-4486-9a8d-e08d114c3f0a-kube-api-access-pw84v\") pod \"nmstate-handler-z9n6m\" (UID: \"a6e7f145-b630-4486-9a8d-e08d114c3f0a\") " pod="openshift-nmstate/nmstate-handler-z9n6m" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.602896 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a6e7f145-b630-4486-9a8d-e08d114c3f0a-ovs-socket\") pod \"nmstate-handler-z9n6m\" (UID: \"a6e7f145-b630-4486-9a8d-e08d114c3f0a\") " pod="openshift-nmstate/nmstate-handler-z9n6m" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.602918 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlx9d\" (UniqueName: \"kubernetes.io/projected/955636b8-9879-4e63-a399-6ac037c1fcd5-kube-api-access-dlx9d\") pod \"nmstate-metrics-5dcf9c57c5-65tj8\" (UID: \"955636b8-9879-4e63-a399-6ac037c1fcd5\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-65tj8" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.602961 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a6e7f145-b630-4486-9a8d-e08d114c3f0a-nmstate-lock\") pod \"nmstate-handler-z9n6m\" (UID: \"a6e7f145-b630-4486-9a8d-e08d114c3f0a\") " pod="openshift-nmstate/nmstate-handler-z9n6m" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.602993 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/21067045-a7f2-4d80-a1fe-d2c15d3b7ee9-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-t5mgc\" (UID: \"21067045-a7f2-4d80-a1fe-d2c15d3b7ee9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5mgc" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.603032 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a6e7f145-b630-4486-9a8d-e08d114c3f0a-dbus-socket\") pod \"nmstate-handler-z9n6m\" (UID: \"a6e7f145-b630-4486-9a8d-e08d114c3f0a\") " pod="openshift-nmstate/nmstate-handler-z9n6m" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.603055 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr5bh\" (UniqueName: \"kubernetes.io/projected/36e9d48f-9291-4ffd-8ca3-342d488e8bc2-kube-api-access-sr5bh\") pod \"nmstate-console-plugin-5874bd7bc5-lvlqg\" (UID: \"36e9d48f-9291-4ffd-8ca3-342d488e8bc2\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-lvlqg" Nov 23 14:55:40 crc kubenswrapper[4718]: E1123 14:55:40.603218 4718 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Nov 23 14:55:40 crc kubenswrapper[4718]: E1123 14:55:40.603289 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21067045-a7f2-4d80-a1fe-d2c15d3b7ee9-tls-key-pair podName:21067045-a7f2-4d80-a1fe-d2c15d3b7ee9 nodeName:}" failed. No retries permitted until 2025-11-23 14:55:41.103269813 +0000 UTC m=+592.342889657 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/21067045-a7f2-4d80-a1fe-d2c15d3b7ee9-tls-key-pair") pod "nmstate-webhook-6b89b748d8-t5mgc" (UID: "21067045-a7f2-4d80-a1fe-d2c15d3b7ee9") : secret "openshift-nmstate-webhook" not found Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.603415 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a6e7f145-b630-4486-9a8d-e08d114c3f0a-nmstate-lock\") pod \"nmstate-handler-z9n6m\" (UID: \"a6e7f145-b630-4486-9a8d-e08d114c3f0a\") " pod="openshift-nmstate/nmstate-handler-z9n6m" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.603567 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/36e9d48f-9291-4ffd-8ca3-342d488e8bc2-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-lvlqg\" (UID: \"36e9d48f-9291-4ffd-8ca3-342d488e8bc2\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-lvlqg" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.603572 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a6e7f145-b630-4486-9a8d-e08d114c3f0a-ovs-socket\") pod \"nmstate-handler-z9n6m\" (UID: \"a6e7f145-b630-4486-9a8d-e08d114c3f0a\") " pod="openshift-nmstate/nmstate-handler-z9n6m" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.603727 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a6e7f145-b630-4486-9a8d-e08d114c3f0a-dbus-socket\") pod \"nmstate-handler-z9n6m\" (UID: \"a6e7f145-b630-4486-9a8d-e08d114c3f0a\") " pod="openshift-nmstate/nmstate-handler-z9n6m" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.637692 4718 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-pw84v\" (UniqueName: \"kubernetes.io/projected/a6e7f145-b630-4486-9a8d-e08d114c3f0a-kube-api-access-pw84v\") pod \"nmstate-handler-z9n6m\" (UID: \"a6e7f145-b630-4486-9a8d-e08d114c3f0a\") " pod="openshift-nmstate/nmstate-handler-z9n6m" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.641545 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xgt4\" (UniqueName: \"kubernetes.io/projected/21067045-a7f2-4d80-a1fe-d2c15d3b7ee9-kube-api-access-7xgt4\") pod \"nmstate-webhook-6b89b748d8-t5mgc\" (UID: \"21067045-a7f2-4d80-a1fe-d2c15d3b7ee9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5mgc" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.646577 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlx9d\" (UniqueName: \"kubernetes.io/projected/955636b8-9879-4e63-a399-6ac037c1fcd5-kube-api-access-dlx9d\") pod \"nmstate-metrics-5dcf9c57c5-65tj8\" (UID: \"955636b8-9879-4e63-a399-6ac037c1fcd5\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-65tj8" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.675137 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-65tj8" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.704927 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-55499b9ddd-52z7q"] Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.705418 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/36e9d48f-9291-4ffd-8ca3-342d488e8bc2-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-lvlqg\" (UID: \"36e9d48f-9291-4ffd-8ca3-342d488e8bc2\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-lvlqg" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.705653 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/36e9d48f-9291-4ffd-8ca3-342d488e8bc2-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-lvlqg\" (UID: \"36e9d48f-9291-4ffd-8ca3-342d488e8bc2\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-lvlqg" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.705819 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr5bh\" (UniqueName: \"kubernetes.io/projected/36e9d48f-9291-4ffd-8ca3-342d488e8bc2-kube-api-access-sr5bh\") pod \"nmstate-console-plugin-5874bd7bc5-lvlqg\" (UID: \"36e9d48f-9291-4ffd-8ca3-342d488e8bc2\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-lvlqg" Nov 23 14:55:40 crc kubenswrapper[4718]: E1123 14:55:40.705826 4718 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Nov 23 14:55:40 crc kubenswrapper[4718]: E1123 14:55:40.706111 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36e9d48f-9291-4ffd-8ca3-342d488e8bc2-plugin-serving-cert podName:36e9d48f-9291-4ffd-8ca3-342d488e8bc2 nodeName:}" failed. No retries permitted until 2025-11-23 14:55:41.206089782 +0000 UTC m=+592.445709626 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/36e9d48f-9291-4ffd-8ca3-342d488e8bc2-plugin-serving-cert") pod "nmstate-console-plugin-5874bd7bc5-lvlqg" (UID: "36e9d48f-9291-4ffd-8ca3-342d488e8bc2") : secret "plugin-serving-cert" not found Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.705557 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.706455 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/36e9d48f-9291-4ffd-8ca3-342d488e8bc2-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-lvlqg\" (UID: \"36e9d48f-9291-4ffd-8ca3-342d488e8bc2\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-lvlqg" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.706497 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-z9n6m" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.723083 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-55499b9ddd-52z7q"] Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.731298 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr5bh\" (UniqueName: \"kubernetes.io/projected/36e9d48f-9291-4ffd-8ca3-342d488e8bc2-kube-api-access-sr5bh\") pod \"nmstate-console-plugin-5874bd7bc5-lvlqg\" (UID: \"36e9d48f-9291-4ffd-8ca3-342d488e8bc2\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-lvlqg" Nov 23 14:55:40 crc kubenswrapper[4718]: W1123 14:55:40.746864 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6e7f145_b630_4486_9a8d_e08d114c3f0a.slice/crio-4d9dabf13b8eb7f23e73756139a4e1fe59beacbfd340f77c0557d549afc37c45 WatchSource:0}: Error finding container 4d9dabf13b8eb7f23e73756139a4e1fe59beacbfd340f77c0557d549afc37c45: Status 404 returned error can't find the container with id 4d9dabf13b8eb7f23e73756139a4e1fe59beacbfd340f77c0557d549afc37c45 Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.807097 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/33f23e32-773d-4cb1-b50d-102503baf192-oauth-serving-cert\") pod \"console-55499b9ddd-52z7q\" (UID: \"33f23e32-773d-4cb1-b50d-102503baf192\") " pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.807609 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/33f23e32-773d-4cb1-b50d-102503baf192-service-ca\") pod \"console-55499b9ddd-52z7q\" (UID: \"33f23e32-773d-4cb1-b50d-102503baf192\") " pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.807729 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/33f23e32-773d-4cb1-b50d-102503baf192-console-config\") pod \"console-55499b9ddd-52z7q\" (UID: \"33f23e32-773d-4cb1-b50d-102503baf192\") " pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.807846 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/33f23e32-773d-4cb1-b50d-102503baf192-console-serving-cert\") pod \"console-55499b9ddd-52z7q\" (UID: \"33f23e32-773d-4cb1-b50d-102503baf192\") " pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.807944 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33f23e32-773d-4cb1-b50d-102503baf192-trusted-ca-bundle\") pod \"console-55499b9ddd-52z7q\" (UID: \"33f23e32-773d-4cb1-b50d-102503baf192\") " pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.808059 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vvk6\" (UniqueName: \"kubernetes.io/projected/33f23e32-773d-4cb1-b50d-102503baf192-kube-api-access-2vvk6\") pod \"console-55499b9ddd-52z7q\" (UID: \"33f23e32-773d-4cb1-b50d-102503baf192\") " pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.808145 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/33f23e32-773d-4cb1-b50d-102503baf192-console-oauth-config\") pod \"console-55499b9ddd-52z7q\" (UID: \"33f23e32-773d-4cb1-b50d-102503baf192\") " pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.910234 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/33f23e32-773d-4cb1-b50d-102503baf192-console-config\") pod \"console-55499b9ddd-52z7q\" (UID: \"33f23e32-773d-4cb1-b50d-102503baf192\") " pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.910284 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/33f23e32-773d-4cb1-b50d-102503baf192-console-serving-cert\") pod \"console-55499b9ddd-52z7q\" (UID: \"33f23e32-773d-4cb1-b50d-102503baf192\") " pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.910306 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33f23e32-773d-4cb1-b50d-102503baf192-trusted-ca-bundle\") pod \"console-55499b9ddd-52z7q\" (UID: \"33f23e32-773d-4cb1-b50d-102503baf192\") " pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.910373 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vvk6\" (UniqueName: \"kubernetes.io/projected/33f23e32-773d-4cb1-b50d-102503baf192-kube-api-access-2vvk6\") pod \"console-55499b9ddd-52z7q\" (UID: \"33f23e32-773d-4cb1-b50d-102503baf192\") " pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.910402 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/33f23e32-773d-4cb1-b50d-102503baf192-console-oauth-config\") pod \"console-55499b9ddd-52z7q\" (UID: \"33f23e32-773d-4cb1-b50d-102503baf192\") " 
pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.910483 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/33f23e32-773d-4cb1-b50d-102503baf192-oauth-serving-cert\") pod \"console-55499b9ddd-52z7q\" (UID: \"33f23e32-773d-4cb1-b50d-102503baf192\") " pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.910536 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/33f23e32-773d-4cb1-b50d-102503baf192-service-ca\") pod \"console-55499b9ddd-52z7q\" (UID: \"33f23e32-773d-4cb1-b50d-102503baf192\") " pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.911735 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/33f23e32-773d-4cb1-b50d-102503baf192-service-ca\") pod \"console-55499b9ddd-52z7q\" (UID: \"33f23e32-773d-4cb1-b50d-102503baf192\") " pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.912382 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/33f23e32-773d-4cb1-b50d-102503baf192-oauth-serving-cert\") pod \"console-55499b9ddd-52z7q\" (UID: \"33f23e32-773d-4cb1-b50d-102503baf192\") " pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.913966 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/33f23e32-773d-4cb1-b50d-102503baf192-console-config\") pod \"console-55499b9ddd-52z7q\" (UID: \"33f23e32-773d-4cb1-b50d-102503baf192\") " pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.914789 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/33f23e32-773d-4cb1-b50d-102503baf192-console-oauth-config\") pod \"console-55499b9ddd-52z7q\" (UID: \"33f23e32-773d-4cb1-b50d-102503baf192\") " pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.915865 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33f23e32-773d-4cb1-b50d-102503baf192-trusted-ca-bundle\") pod \"console-55499b9ddd-52z7q\" (UID: \"33f23e32-773d-4cb1-b50d-102503baf192\") " pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.922984 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/33f23e32-773d-4cb1-b50d-102503baf192-console-serving-cert\") pod \"console-55499b9ddd-52z7q\" (UID: \"33f23e32-773d-4cb1-b50d-102503baf192\") " pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:40 crc kubenswrapper[4718]: I1123 14:55:40.930314 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vvk6\" (UniqueName: \"kubernetes.io/projected/33f23e32-773d-4cb1-b50d-102503baf192-kube-api-access-2vvk6\") pod \"console-55499b9ddd-52z7q\" (UID: \"33f23e32-773d-4cb1-b50d-102503baf192\") " pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:41 crc 
kubenswrapper[4718]: I1123 14:55:41.079231 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:41 crc kubenswrapper[4718]: I1123 14:55:41.105577 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-z9n6m" event={"ID":"a6e7f145-b630-4486-9a8d-e08d114c3f0a","Type":"ContainerStarted","Data":"4d9dabf13b8eb7f23e73756139a4e1fe59beacbfd340f77c0557d549afc37c45"} Nov 23 14:55:41 crc kubenswrapper[4718]: I1123 14:55:41.116215 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/21067045-a7f2-4d80-a1fe-d2c15d3b7ee9-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-t5mgc\" (UID: \"21067045-a7f2-4d80-a1fe-d2c15d3b7ee9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5mgc" Nov 23 14:55:41 crc kubenswrapper[4718]: I1123 14:55:41.119712 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/21067045-a7f2-4d80-a1fe-d2c15d3b7ee9-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-t5mgc\" (UID: \"21067045-a7f2-4d80-a1fe-d2c15d3b7ee9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5mgc" Nov 23 14:55:41 crc kubenswrapper[4718]: I1123 14:55:41.159548 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-65tj8"] Nov 23 14:55:41 crc kubenswrapper[4718]: W1123 14:55:41.169812 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod955636b8_9879_4e63_a399_6ac037c1fcd5.slice/crio-9bb07442013bd9b2f367e7a8b6727d4f9c3f67480ab966bc0323a0ab4366319d WatchSource:0}: Error finding container 9bb07442013bd9b2f367e7a8b6727d4f9c3f67480ab966bc0323a0ab4366319d: Status 404 returned error can't find the container with id 9bb07442013bd9b2f367e7a8b6727d4f9c3f67480ab966bc0323a0ab4366319d Nov 23 14:55:41 crc kubenswrapper[4718]: I1123 14:55:41.217296 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/36e9d48f-9291-4ffd-8ca3-342d488e8bc2-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-lvlqg\" (UID: \"36e9d48f-9291-4ffd-8ca3-342d488e8bc2\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-lvlqg" Nov 23 14:55:41 crc kubenswrapper[4718]: I1123 14:55:41.221046 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/36e9d48f-9291-4ffd-8ca3-342d488e8bc2-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-lvlqg\" (UID: \"36e9d48f-9291-4ffd-8ca3-342d488e8bc2\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-lvlqg" Nov 23 14:55:41 crc kubenswrapper[4718]: I1123 14:55:41.259391 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-55499b9ddd-52z7q"] Nov 23 14:55:41 crc kubenswrapper[4718]: W1123 14:55:41.262782 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33f23e32_773d_4cb1_b50d_102503baf192.slice/crio-a63e7b9ddab27a361c9489e1808c21776668e421531a0b3faba41e8e1bc46324 WatchSource:0}: Error finding container a63e7b9ddab27a361c9489e1808c21776668e421531a0b3faba41e8e1bc46324: Status 404 returned error can't find the container with id a63e7b9ddab27a361c9489e1808c21776668e421531a0b3faba41e8e1bc46324 Nov 23 14:55:41 
crc kubenswrapper[4718]: I1123 14:55:41.316262 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5mgc" Nov 23 14:55:41 crc kubenswrapper[4718]: I1123 14:55:41.406081 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-lvlqg" Nov 23 14:55:41 crc kubenswrapper[4718]: I1123 14:55:41.506332 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-t5mgc"] Nov 23 14:55:41 crc kubenswrapper[4718]: W1123 14:55:41.513064 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21067045_a7f2_4d80_a1fe_d2c15d3b7ee9.slice/crio-85e6f380de02313982422b0821b465c311cab240fbffbd4903d518193ce66fcb WatchSource:0}: Error finding container 85e6f380de02313982422b0821b465c311cab240fbffbd4903d518193ce66fcb: Status 404 returned error can't find the container with id 85e6f380de02313982422b0821b465c311cab240fbffbd4903d518193ce66fcb Nov 23 14:55:41 crc kubenswrapper[4718]: I1123 14:55:41.604532 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-lvlqg"] Nov 23 14:55:41 crc kubenswrapper[4718]: W1123 14:55:41.610071 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36e9d48f_9291_4ffd_8ca3_342d488e8bc2.slice/crio-3b6dacfaa5ff73d4ccfed28847d6144edec73a786dd1174d556eda544f8c18e3 WatchSource:0}: Error finding container 3b6dacfaa5ff73d4ccfed28847d6144edec73a786dd1174d556eda544f8c18e3: Status 404 returned error can't find the container with id 3b6dacfaa5ff73d4ccfed28847d6144edec73a786dd1174d556eda544f8c18e3 Nov 23 14:55:42 crc kubenswrapper[4718]: I1123 14:55:42.111818 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-lvlqg" event={"ID":"36e9d48f-9291-4ffd-8ca3-342d488e8bc2","Type":"ContainerStarted","Data":"3b6dacfaa5ff73d4ccfed28847d6144edec73a786dd1174d556eda544f8c18e3"} Nov 23 14:55:42 crc kubenswrapper[4718]: I1123 14:55:42.112968 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5mgc" event={"ID":"21067045-a7f2-4d80-a1fe-d2c15d3b7ee9","Type":"ContainerStarted","Data":"85e6f380de02313982422b0821b465c311cab240fbffbd4903d518193ce66fcb"} Nov 23 14:55:42 crc kubenswrapper[4718]: I1123 14:55:42.114521 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55499b9ddd-52z7q" event={"ID":"33f23e32-773d-4cb1-b50d-102503baf192","Type":"ContainerStarted","Data":"3f99fcb02a729471f39256226ed82f89c556699bec3eeb3de273141b649c6b86"} Nov 23 14:55:42 crc kubenswrapper[4718]: I1123 14:55:42.114547 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55499b9ddd-52z7q" event={"ID":"33f23e32-773d-4cb1-b50d-102503baf192","Type":"ContainerStarted","Data":"a63e7b9ddab27a361c9489e1808c21776668e421531a0b3faba41e8e1bc46324"} Nov 23 14:55:42 crc kubenswrapper[4718]: I1123 14:55:42.115575 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-65tj8" event={"ID":"955636b8-9879-4e63-a399-6ac037c1fcd5","Type":"ContainerStarted","Data":"9bb07442013bd9b2f367e7a8b6727d4f9c3f67480ab966bc0323a0ab4366319d"} Nov 23 14:55:42 crc kubenswrapper[4718]: I1123 14:55:42.144809 4718 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-console/console-55499b9ddd-52z7q" podStartSLOduration=2.144787985 podStartE2EDuration="2.144787985s" podCreationTimestamp="2025-11-23 14:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:55:42.140683753 +0000 UTC m=+593.380303607" watchObservedRunningTime="2025-11-23 14:55:42.144787985 +0000 UTC m=+593.384407829" Nov 23 14:55:44 crc kubenswrapper[4718]: I1123 14:55:44.126658 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-z9n6m" event={"ID":"a6e7f145-b630-4486-9a8d-e08d114c3f0a","Type":"ContainerStarted","Data":"223ff3048e01424cc6f3f1dfe3ddb2d2db1cda1cd8c5d1e413aae6108c510596"} Nov 23 14:55:44 crc kubenswrapper[4718]: I1123 14:55:44.127197 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-z9n6m" Nov 23 14:55:44 crc kubenswrapper[4718]: I1123 14:55:44.128281 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-65tj8" event={"ID":"955636b8-9879-4e63-a399-6ac037c1fcd5","Type":"ContainerStarted","Data":"a6250cd2da85b4bb2799645522a82a69c2407e7fa291e0768a2695fe9f7671be"} Nov 23 14:55:44 crc kubenswrapper[4718]: I1123 14:55:44.130117 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5mgc" event={"ID":"21067045-a7f2-4d80-a1fe-d2c15d3b7ee9","Type":"ContainerStarted","Data":"079d1409e44abf2dfc70c7c58f5d6e8dfab746bbad7145b8563ca45ccd7b81ad"} Nov 23 14:55:44 crc kubenswrapper[4718]: I1123 14:55:44.130312 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5mgc" Nov 23 14:55:44 crc kubenswrapper[4718]: I1123 14:55:44.142532 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-z9n6m" podStartSLOduration=1.641066168 podStartE2EDuration="4.142516838s" podCreationTimestamp="2025-11-23 14:55:40 +0000 UTC" firstStartedPulling="2025-11-23 14:55:40.761971158 +0000 UTC m=+592.001591002" lastFinishedPulling="2025-11-23 14:55:43.263421828 +0000 UTC m=+594.503041672" observedRunningTime="2025-11-23 14:55:44.141072589 +0000 UTC m=+595.380692433" watchObservedRunningTime="2025-11-23 14:55:44.142516838 +0000 UTC m=+595.382136682" Nov 23 14:55:44 crc kubenswrapper[4718]: I1123 14:55:44.159375 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5mgc" podStartSLOduration=2.4422726 podStartE2EDuration="4.159355548s" podCreationTimestamp="2025-11-23 14:55:40 +0000 UTC" firstStartedPulling="2025-11-23 14:55:41.514727427 +0000 UTC m=+592.754347271" lastFinishedPulling="2025-11-23 14:55:43.231810365 +0000 UTC m=+594.471430219" observedRunningTime="2025-11-23 14:55:44.157997291 +0000 UTC m=+595.397617145" watchObservedRunningTime="2025-11-23 14:55:44.159355548 +0000 UTC m=+595.398975392" Nov 23 14:55:45 crc kubenswrapper[4718]: I1123 14:55:45.138099 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-lvlqg" event={"ID":"36e9d48f-9291-4ffd-8ca3-342d488e8bc2","Type":"ContainerStarted","Data":"a3d246f5ffba542424fd3c8b4ad81184d95595d57aef1b5c4f79f163eba8565d"} Nov 23 14:55:45 crc kubenswrapper[4718]: I1123 14:55:45.165094 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-lvlqg" podStartSLOduration=2.574314968 podStartE2EDuration="5.165071567s" podCreationTimestamp="2025-11-23 14:55:40 +0000 UTC" firstStartedPulling="2025-11-23 14:55:41.612164409 +0000 UTC m=+592.851784253" lastFinishedPulling="2025-11-23 14:55:44.202921008 +0000 UTC m=+595.442540852" observedRunningTime="2025-11-23 14:55:45.15238918 +0000 UTC m=+596.392009024" watchObservedRunningTime="2025-11-23 14:55:45.165071567 +0000 UTC m=+596.404691401" Nov 23 14:55:46 crc kubenswrapper[4718]: I1123 14:55:46.144828 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-65tj8" event={"ID":"955636b8-9879-4e63-a399-6ac037c1fcd5","Type":"ContainerStarted","Data":"35bff031450abac495a211e726002f86485751c6a2115733ada628a654c54394"} Nov 23 14:55:46 crc kubenswrapper[4718]: I1123 14:55:46.172991 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-65tj8" podStartSLOduration=1.806742504 podStartE2EDuration="6.172966715s" podCreationTimestamp="2025-11-23 14:55:40 +0000 UTC" firstStartedPulling="2025-11-23 14:55:41.17504103 +0000 UTC m=+592.414660874" lastFinishedPulling="2025-11-23 14:55:45.541265231 +0000 UTC m=+596.780885085" observedRunningTime="2025-11-23 14:55:46.171472534 +0000 UTC m=+597.411092388" watchObservedRunningTime="2025-11-23 14:55:46.172966715 +0000 UTC m=+597.412586559" Nov 23 14:55:50 crc kubenswrapper[4718]: I1123 14:55:50.729702 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-z9n6m" Nov 23 14:55:51 crc kubenswrapper[4718]: I1123 14:55:51.080191 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:51 crc kubenswrapper[4718]: I1123 14:55:51.080255 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:51 crc kubenswrapper[4718]: I1123 14:55:51.086913 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:51 crc kubenswrapper[4718]: I1123 14:55:51.175465 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-55499b9ddd-52z7q" Nov 23 14:55:51 crc kubenswrapper[4718]: I1123 14:55:51.224531 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-bzr6j"] Nov 23 14:56:01 crc kubenswrapper[4718]: I1123 14:56:01.324738 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-t5mgc" Nov 23 14:56:14 crc kubenswrapper[4718]: I1123 14:56:14.873973 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx"] Nov 23 14:56:14 crc kubenswrapper[4718]: I1123 14:56:14.875698 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx" Nov 23 14:56:14 crc kubenswrapper[4718]: I1123 14:56:14.877462 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 23 14:56:14 crc kubenswrapper[4718]: I1123 14:56:14.890648 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx"] Nov 23 14:56:14 crc kubenswrapper[4718]: I1123 14:56:14.963937 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf2q8\" (UniqueName: \"kubernetes.io/projected/756bc291-abf5-4395-8e55-5140aae72299-kube-api-access-vf2q8\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx\" (UID: \"756bc291-abf5-4395-8e55-5140aae72299\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx" Nov 23 14:56:14 crc kubenswrapper[4718]: I1123 14:56:14.964008 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/756bc291-abf5-4395-8e55-5140aae72299-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx\" (UID: \"756bc291-abf5-4395-8e55-5140aae72299\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx" Nov 23 14:56:14 crc kubenswrapper[4718]: I1123 14:56:14.964065 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/756bc291-abf5-4395-8e55-5140aae72299-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx\" (UID: \"756bc291-abf5-4395-8e55-5140aae72299\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx" Nov 23 14:56:15 crc kubenswrapper[4718]: I1123 14:56:15.064876 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf2q8\" (UniqueName: \"kubernetes.io/projected/756bc291-abf5-4395-8e55-5140aae72299-kube-api-access-vf2q8\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx\" (UID: \"756bc291-abf5-4395-8e55-5140aae72299\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx" Nov 23 14:56:15 crc kubenswrapper[4718]: I1123 14:56:15.064936 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/756bc291-abf5-4395-8e55-5140aae72299-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx\" (UID: \"756bc291-abf5-4395-8e55-5140aae72299\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx" Nov 23 14:56:15 crc kubenswrapper[4718]: I1123 14:56:15.064976 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/756bc291-abf5-4395-8e55-5140aae72299-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx\" (UID: \"756bc291-abf5-4395-8e55-5140aae72299\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx" Nov 23 14:56:15 crc kubenswrapper[4718]: I1123 14:56:15.065429 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/756bc291-abf5-4395-8e55-5140aae72299-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx\" (UID: \"756bc291-abf5-4395-8e55-5140aae72299\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx" Nov 23 14:56:15 crc kubenswrapper[4718]: I1123 14:56:15.065633 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/756bc291-abf5-4395-8e55-5140aae72299-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx\" (UID: \"756bc291-abf5-4395-8e55-5140aae72299\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx" Nov 23 14:56:15 crc kubenswrapper[4718]: I1123 14:56:15.084362 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf2q8\" (UniqueName: \"kubernetes.io/projected/756bc291-abf5-4395-8e55-5140aae72299-kube-api-access-vf2q8\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx\" (UID: \"756bc291-abf5-4395-8e55-5140aae72299\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx" Nov 23 14:56:15 crc kubenswrapper[4718]: I1123 14:56:15.191974 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx" Nov 23 14:56:15 crc kubenswrapper[4718]: I1123 14:56:15.396776 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx"] Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.274974 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-bzr6j" podUID="0e08255e-d787-4ed2-a984-3f4bd8a8e2d2" containerName="console" containerID="cri-o://dc0a958178c5f4c0db1ef196339f089d352d567f4af62785107a05cd73955ec4" gracePeriod=15 Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.323672 4718 generic.go:334] "Generic (PLEG): container finished" podID="756bc291-abf5-4395-8e55-5140aae72299" containerID="70f3e4b568b63e3d9fdfb184e35222ed142c757bb5ad8a77d6e6149671943c97" exitCode=0 Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.323725 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx" event={"ID":"756bc291-abf5-4395-8e55-5140aae72299","Type":"ContainerDied","Data":"70f3e4b568b63e3d9fdfb184e35222ed142c757bb5ad8a77d6e6149671943c97"} Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.323755 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx" event={"ID":"756bc291-abf5-4395-8e55-5140aae72299","Type":"ContainerStarted","Data":"8f6429b97e988863eb6f5f20b23c16fe9a45a97be8b0b787046b2bccc9773438"} Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.614522 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-bzr6j_0e08255e-d787-4ed2-a984-3f4bd8a8e2d2/console/0.log" Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.615058 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.685016 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-console-config\") pod \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.685103 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-console-serving-cert\") pod \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.685150 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv4th\" (UniqueName: \"kubernetes.io/projected/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-kube-api-access-gv4th\") pod \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.685191 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-trusted-ca-bundle\") pod \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.685213 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-console-oauth-config\") pod \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.685230 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-oauth-serving-cert\") pod \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.685255 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-service-ca\") pod \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\" (UID: \"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2\") " Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.687173 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "0e08255e-d787-4ed2-a984-3f4bd8a8e2d2" (UID: "0e08255e-d787-4ed2-a984-3f4bd8a8e2d2"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.687216 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "0e08255e-d787-4ed2-a984-3f4bd8a8e2d2" (UID: "0e08255e-d787-4ed2-a984-3f4bd8a8e2d2"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.687188 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-console-config" (OuterVolumeSpecName: "console-config") pod "0e08255e-d787-4ed2-a984-3f4bd8a8e2d2" (UID: "0e08255e-d787-4ed2-a984-3f4bd8a8e2d2"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.687265 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-service-ca" (OuterVolumeSpecName: "service-ca") pod "0e08255e-d787-4ed2-a984-3f4bd8a8e2d2" (UID: "0e08255e-d787-4ed2-a984-3f4bd8a8e2d2"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.691254 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "0e08255e-d787-4ed2-a984-3f4bd8a8e2d2" (UID: "0e08255e-d787-4ed2-a984-3f4bd8a8e2d2"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.694673 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-kube-api-access-gv4th" (OuterVolumeSpecName: "kube-api-access-gv4th") pod "0e08255e-d787-4ed2-a984-3f4bd8a8e2d2" (UID: "0e08255e-d787-4ed2-a984-3f4bd8a8e2d2"). InnerVolumeSpecName "kube-api-access-gv4th". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.694715 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "0e08255e-d787-4ed2-a984-3f4bd8a8e2d2" (UID: "0e08255e-d787-4ed2-a984-3f4bd8a8e2d2"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.786482 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv4th\" (UniqueName: \"kubernetes.io/projected/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-kube-api-access-gv4th\") on node \"crc\" DevicePath \"\"" Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.786528 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.786542 4718 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.786554 4718 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.786565 4718 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.786576 4718 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-console-config\") on node \"crc\" DevicePath \"\"" Nov 23 14:56:16 crc kubenswrapper[4718]: I1123 14:56:16.786591 4718 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:56:17 crc kubenswrapper[4718]: I1123 14:56:17.332071 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-bzr6j_0e08255e-d787-4ed2-a984-3f4bd8a8e2d2/console/0.log" Nov 23 14:56:17 crc kubenswrapper[4718]: I1123 14:56:17.332126 4718 generic.go:334] "Generic (PLEG): container finished" podID="0e08255e-d787-4ed2-a984-3f4bd8a8e2d2" containerID="dc0a958178c5f4c0db1ef196339f089d352d567f4af62785107a05cd73955ec4" exitCode=2 Nov 23 14:56:17 crc kubenswrapper[4718]: I1123 14:56:17.332161 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bzr6j" event={"ID":"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2","Type":"ContainerDied","Data":"dc0a958178c5f4c0db1ef196339f089d352d567f4af62785107a05cd73955ec4"} Nov 23 14:56:17 crc kubenswrapper[4718]: I1123 14:56:17.332197 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bzr6j" event={"ID":"0e08255e-d787-4ed2-a984-3f4bd8a8e2d2","Type":"ContainerDied","Data":"9bec6fed55e9a7c30375f4aa36daa7fe898bc21549bfab0c3cbe97c54c1f6db7"} Nov 23 14:56:17 crc kubenswrapper[4718]: I1123 14:56:17.332216 4718 scope.go:117] "RemoveContainer" containerID="dc0a958178c5f4c0db1ef196339f089d352d567f4af62785107a05cd73955ec4" Nov 23 14:56:17 crc kubenswrapper[4718]: I1123 14:56:17.332210 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-bzr6j" Nov 23 14:56:17 crc kubenswrapper[4718]: I1123 14:56:17.352130 4718 scope.go:117] "RemoveContainer" containerID="dc0a958178c5f4c0db1ef196339f089d352d567f4af62785107a05cd73955ec4" Nov 23 14:56:17 crc kubenswrapper[4718]: E1123 14:56:17.352669 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc0a958178c5f4c0db1ef196339f089d352d567f4af62785107a05cd73955ec4\": container with ID starting with dc0a958178c5f4c0db1ef196339f089d352d567f4af62785107a05cd73955ec4 not found: ID does not exist" containerID="dc0a958178c5f4c0db1ef196339f089d352d567f4af62785107a05cd73955ec4" Nov 23 14:56:17 crc kubenswrapper[4718]: I1123 14:56:17.352720 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc0a958178c5f4c0db1ef196339f089d352d567f4af62785107a05cd73955ec4"} err="failed to get container status \"dc0a958178c5f4c0db1ef196339f089d352d567f4af62785107a05cd73955ec4\": rpc error: code = NotFound desc = could not find container \"dc0a958178c5f4c0db1ef196339f089d352d567f4af62785107a05cd73955ec4\": container with ID starting with dc0a958178c5f4c0db1ef196339f089d352d567f4af62785107a05cd73955ec4 not found: ID does not exist" Nov 23 14:56:17 crc kubenswrapper[4718]: I1123 14:56:17.370670 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-bzr6j"] Nov 23 14:56:17 crc kubenswrapper[4718]: I1123 14:56:17.377717 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-bzr6j"] Nov 23 14:56:18 crc kubenswrapper[4718]: I1123 14:56:18.344002 4718 generic.go:334] "Generic (PLEG): container finished" podID="756bc291-abf5-4395-8e55-5140aae72299" containerID="3e213b287af47cccd4ba946b20f11e80f06b6f7229c1567dac2c327a49efd3d4" exitCode=0 Nov 23 14:56:18 crc kubenswrapper[4718]: I1123 14:56:18.344252 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx" event={"ID":"756bc291-abf5-4395-8e55-5140aae72299","Type":"ContainerDied","Data":"3e213b287af47cccd4ba946b20f11e80f06b6f7229c1567dac2c327a49efd3d4"} Nov 23 14:56:18 crc kubenswrapper[4718]: I1123 14:56:18.450479 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e08255e-d787-4ed2-a984-3f4bd8a8e2d2" path="/var/lib/kubelet/pods/0e08255e-d787-4ed2-a984-3f4bd8a8e2d2/volumes" Nov 23 14:56:19 crc kubenswrapper[4718]: I1123 14:56:19.353533 4718 generic.go:334] "Generic (PLEG): container finished" podID="756bc291-abf5-4395-8e55-5140aae72299" containerID="e4fc1ecea80efbbe63d0a364a597b7712e03024db40b3a120ae60fa601089d50" exitCode=0 Nov 23 14:56:19 crc kubenswrapper[4718]: I1123 14:56:19.353574 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx" event={"ID":"756bc291-abf5-4395-8e55-5140aae72299","Type":"ContainerDied","Data":"e4fc1ecea80efbbe63d0a364a597b7712e03024db40b3a120ae60fa601089d50"} Nov 23 14:56:20 crc kubenswrapper[4718]: I1123 14:56:20.633998 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx" Nov 23 14:56:20 crc kubenswrapper[4718]: I1123 14:56:20.735067 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf2q8\" (UniqueName: \"kubernetes.io/projected/756bc291-abf5-4395-8e55-5140aae72299-kube-api-access-vf2q8\") pod \"756bc291-abf5-4395-8e55-5140aae72299\" (UID: \"756bc291-abf5-4395-8e55-5140aae72299\") " Nov 23 14:56:20 crc kubenswrapper[4718]: I1123 14:56:20.735164 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/756bc291-abf5-4395-8e55-5140aae72299-util\") pod \"756bc291-abf5-4395-8e55-5140aae72299\" (UID: \"756bc291-abf5-4395-8e55-5140aae72299\") " Nov 23 14:56:20 crc kubenswrapper[4718]: I1123 14:56:20.735193 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/756bc291-abf5-4395-8e55-5140aae72299-bundle\") pod \"756bc291-abf5-4395-8e55-5140aae72299\" (UID: \"756bc291-abf5-4395-8e55-5140aae72299\") " Nov 23 14:56:20 crc kubenswrapper[4718]: I1123 14:56:20.736756 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/756bc291-abf5-4395-8e55-5140aae72299-bundle" (OuterVolumeSpecName: "bundle") pod "756bc291-abf5-4395-8e55-5140aae72299" (UID: "756bc291-abf5-4395-8e55-5140aae72299"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:56:20 crc kubenswrapper[4718]: I1123 14:56:20.740999 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/756bc291-abf5-4395-8e55-5140aae72299-kube-api-access-vf2q8" (OuterVolumeSpecName: "kube-api-access-vf2q8") pod "756bc291-abf5-4395-8e55-5140aae72299" (UID: "756bc291-abf5-4395-8e55-5140aae72299"). InnerVolumeSpecName "kube-api-access-vf2q8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:56:20 crc kubenswrapper[4718]: I1123 14:56:20.750451 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/756bc291-abf5-4395-8e55-5140aae72299-util" (OuterVolumeSpecName: "util") pod "756bc291-abf5-4395-8e55-5140aae72299" (UID: "756bc291-abf5-4395-8e55-5140aae72299"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:56:20 crc kubenswrapper[4718]: I1123 14:56:20.836612 4718 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/756bc291-abf5-4395-8e55-5140aae72299-util\") on node \"crc\" DevicePath \"\"" Nov 23 14:56:20 crc kubenswrapper[4718]: I1123 14:56:20.836654 4718 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/756bc291-abf5-4395-8e55-5140aae72299-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 14:56:20 crc kubenswrapper[4718]: I1123 14:56:20.836667 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf2q8\" (UniqueName: \"kubernetes.io/projected/756bc291-abf5-4395-8e55-5140aae72299-kube-api-access-vf2q8\") on node \"crc\" DevicePath \"\"" Nov 23 14:56:21 crc kubenswrapper[4718]: I1123 14:56:21.366468 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx" event={"ID":"756bc291-abf5-4395-8e55-5140aae72299","Type":"ContainerDied","Data":"8f6429b97e988863eb6f5f20b23c16fe9a45a97be8b0b787046b2bccc9773438"} Nov 23 14:56:21 crc kubenswrapper[4718]: I1123 14:56:21.366540 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f6429b97e988863eb6f5f20b23c16fe9a45a97be8b0b787046b2bccc9773438" Nov 23 14:56:21 crc kubenswrapper[4718]: I1123 14:56:21.367136 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx" Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.478159 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-768fb95d78-mp5lj"] Nov 23 14:56:35 crc kubenswrapper[4718]: E1123 14:56:35.479665 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="756bc291-abf5-4395-8e55-5140aae72299" containerName="pull" Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.479744 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="756bc291-abf5-4395-8e55-5140aae72299" containerName="pull" Nov 23 14:56:35 crc kubenswrapper[4718]: E1123 14:56:35.479807 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="756bc291-abf5-4395-8e55-5140aae72299" containerName="util" Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.479857 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="756bc291-abf5-4395-8e55-5140aae72299" containerName="util" Nov 23 14:56:35 crc kubenswrapper[4718]: E1123 14:56:35.479908 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e08255e-d787-4ed2-a984-3f4bd8a8e2d2" containerName="console" Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.479956 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e08255e-d787-4ed2-a984-3f4bd8a8e2d2" containerName="console" Nov 23 14:56:35 crc kubenswrapper[4718]: E1123 14:56:35.480208 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="756bc291-abf5-4395-8e55-5140aae72299" containerName="extract" Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.480257 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="756bc291-abf5-4395-8e55-5140aae72299" containerName="extract" Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.480399 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e08255e-d787-4ed2-a984-3f4bd8a8e2d2" containerName="console" Nov 
Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.480482 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="756bc291-abf5-4395-8e55-5140aae72299" containerName="extract"
Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.480906 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-768fb95d78-mp5lj"
Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.482809 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-6nx9h"
Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.483784 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.484298 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.484604 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.485153 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.504200 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-768fb95d78-mp5lj"]
Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.639834 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/44722e76-1f31-46d4-b765-abd86f655b27-apiservice-cert\") pod \"metallb-operator-controller-manager-768fb95d78-mp5lj\" (UID: \"44722e76-1f31-46d4-b765-abd86f655b27\") " pod="metallb-system/metallb-operator-controller-manager-768fb95d78-mp5lj"
Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.639882 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/44722e76-1f31-46d4-b765-abd86f655b27-webhook-cert\") pod \"metallb-operator-controller-manager-768fb95d78-mp5lj\" (UID: \"44722e76-1f31-46d4-b765-abd86f655b27\") " pod="metallb-system/metallb-operator-controller-manager-768fb95d78-mp5lj"
Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.639906 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgvb7\" (UniqueName: \"kubernetes.io/projected/44722e76-1f31-46d4-b765-abd86f655b27-kube-api-access-fgvb7\") pod \"metallb-operator-controller-manager-768fb95d78-mp5lj\" (UID: \"44722e76-1f31-46d4-b765-abd86f655b27\") " pod="metallb-system/metallb-operator-controller-manager-768fb95d78-mp5lj"
Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.741327 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/44722e76-1f31-46d4-b765-abd86f655b27-apiservice-cert\") pod \"metallb-operator-controller-manager-768fb95d78-mp5lj\" (UID: \"44722e76-1f31-46d4-b765-abd86f655b27\") " pod="metallb-system/metallb-operator-controller-manager-768fb95d78-mp5lj"
Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.741382 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/44722e76-1f31-46d4-b765-abd86f655b27-webhook-cert\") pod \"metallb-operator-controller-manager-768fb95d78-mp5lj\" (UID: \"44722e76-1f31-46d4-b765-abd86f655b27\") " pod="metallb-system/metallb-operator-controller-manager-768fb95d78-mp5lj"
Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.741406 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgvb7\" (UniqueName: \"kubernetes.io/projected/44722e76-1f31-46d4-b765-abd86f655b27-kube-api-access-fgvb7\") pod \"metallb-operator-controller-manager-768fb95d78-mp5lj\" (UID: \"44722e76-1f31-46d4-b765-abd86f655b27\") " pod="metallb-system/metallb-operator-controller-manager-768fb95d78-mp5lj"
Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.750153 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/44722e76-1f31-46d4-b765-abd86f655b27-webhook-cert\") pod \"metallb-operator-controller-manager-768fb95d78-mp5lj\" (UID: \"44722e76-1f31-46d4-b765-abd86f655b27\") " pod="metallb-system/metallb-operator-controller-manager-768fb95d78-mp5lj"
Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.751944 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/44722e76-1f31-46d4-b765-abd86f655b27-apiservice-cert\") pod \"metallb-operator-controller-manager-768fb95d78-mp5lj\" (UID: \"44722e76-1f31-46d4-b765-abd86f655b27\") " pod="metallb-system/metallb-operator-controller-manager-768fb95d78-mp5lj"
Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.787093 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgvb7\" (UniqueName: \"kubernetes.io/projected/44722e76-1f31-46d4-b765-abd86f655b27-kube-api-access-fgvb7\") pod \"metallb-operator-controller-manager-768fb95d78-mp5lj\" (UID: \"44722e76-1f31-46d4-b765-abd86f655b27\") " pod="metallb-system/metallb-operator-controller-manager-768fb95d78-mp5lj"
Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.799089 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-768fb95d78-mp5lj"
Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.799710 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7bf7789474-dgc67"]
Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.800555 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7bf7789474-dgc67"
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7bf7789474-dgc67" Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.814992 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.815202 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.823883 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-2c9g6" Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.860543 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7bf7789474-dgc67"] Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.944475 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsfxw\" (UniqueName: \"kubernetes.io/projected/0e181755-dfbb-4608-b061-cbb0e95d6f95-kube-api-access-rsfxw\") pod \"metallb-operator-webhook-server-7bf7789474-dgc67\" (UID: \"0e181755-dfbb-4608-b061-cbb0e95d6f95\") " pod="metallb-system/metallb-operator-webhook-server-7bf7789474-dgc67" Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.944549 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e181755-dfbb-4608-b061-cbb0e95d6f95-apiservice-cert\") pod \"metallb-operator-webhook-server-7bf7789474-dgc67\" (UID: \"0e181755-dfbb-4608-b061-cbb0e95d6f95\") " pod="metallb-system/metallb-operator-webhook-server-7bf7789474-dgc67" Nov 23 14:56:35 crc kubenswrapper[4718]: I1123 14:56:35.944604 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e181755-dfbb-4608-b061-cbb0e95d6f95-webhook-cert\") pod \"metallb-operator-webhook-server-7bf7789474-dgc67\" (UID: \"0e181755-dfbb-4608-b061-cbb0e95d6f95\") " pod="metallb-system/metallb-operator-webhook-server-7bf7789474-dgc67" Nov 23 14:56:36 crc kubenswrapper[4718]: I1123 14:56:36.045766 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsfxw\" (UniqueName: \"kubernetes.io/projected/0e181755-dfbb-4608-b061-cbb0e95d6f95-kube-api-access-rsfxw\") pod \"metallb-operator-webhook-server-7bf7789474-dgc67\" (UID: \"0e181755-dfbb-4608-b061-cbb0e95d6f95\") " pod="metallb-system/metallb-operator-webhook-server-7bf7789474-dgc67" Nov 23 14:56:36 crc kubenswrapper[4718]: I1123 14:56:36.045831 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e181755-dfbb-4608-b061-cbb0e95d6f95-apiservice-cert\") pod \"metallb-operator-webhook-server-7bf7789474-dgc67\" (UID: \"0e181755-dfbb-4608-b061-cbb0e95d6f95\") " pod="metallb-system/metallb-operator-webhook-server-7bf7789474-dgc67" Nov 23 14:56:36 crc kubenswrapper[4718]: I1123 14:56:36.045867 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e181755-dfbb-4608-b061-cbb0e95d6f95-webhook-cert\") pod \"metallb-operator-webhook-server-7bf7789474-dgc67\" (UID: \"0e181755-dfbb-4608-b061-cbb0e95d6f95\") " pod="metallb-system/metallb-operator-webhook-server-7bf7789474-dgc67" Nov 23 14:56:36 crc kubenswrapper[4718]: I1123 
14:56:36.052451 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e181755-dfbb-4608-b061-cbb0e95d6f95-apiservice-cert\") pod \"metallb-operator-webhook-server-7bf7789474-dgc67\" (UID: \"0e181755-dfbb-4608-b061-cbb0e95d6f95\") " pod="metallb-system/metallb-operator-webhook-server-7bf7789474-dgc67" Nov 23 14:56:36 crc kubenswrapper[4718]: I1123 14:56:36.052771 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e181755-dfbb-4608-b061-cbb0e95d6f95-webhook-cert\") pod \"metallb-operator-webhook-server-7bf7789474-dgc67\" (UID: \"0e181755-dfbb-4608-b061-cbb0e95d6f95\") " pod="metallb-system/metallb-operator-webhook-server-7bf7789474-dgc67" Nov 23 14:56:36 crc kubenswrapper[4718]: I1123 14:56:36.066636 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsfxw\" (UniqueName: \"kubernetes.io/projected/0e181755-dfbb-4608-b061-cbb0e95d6f95-kube-api-access-rsfxw\") pod \"metallb-operator-webhook-server-7bf7789474-dgc67\" (UID: \"0e181755-dfbb-4608-b061-cbb0e95d6f95\") " pod="metallb-system/metallb-operator-webhook-server-7bf7789474-dgc67" Nov 23 14:56:36 crc kubenswrapper[4718]: I1123 14:56:36.086414 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-768fb95d78-mp5lj"] Nov 23 14:56:36 crc kubenswrapper[4718]: W1123 14:56:36.093866 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44722e76_1f31_46d4_b765_abd86f655b27.slice/crio-509efce28d35025ec055d96a85399fb35323af51e4605a0637ad6206f226529d WatchSource:0}: Error finding container 509efce28d35025ec055d96a85399fb35323af51e4605a0637ad6206f226529d: Status 404 returned error can't find the container with id 509efce28d35025ec055d96a85399fb35323af51e4605a0637ad6206f226529d Nov 23 14:56:36 crc kubenswrapper[4718]: I1123 14:56:36.182976 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7bf7789474-dgc67" Nov 23 14:56:36 crc kubenswrapper[4718]: I1123 14:56:36.448626 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-768fb95d78-mp5lj" event={"ID":"44722e76-1f31-46d4-b765-abd86f655b27","Type":"ContainerStarted","Data":"509efce28d35025ec055d96a85399fb35323af51e4605a0637ad6206f226529d"} Nov 23 14:56:36 crc kubenswrapper[4718]: I1123 14:56:36.590719 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7bf7789474-dgc67"] Nov 23 14:56:36 crc kubenswrapper[4718]: W1123 14:56:36.598092 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e181755_dfbb_4608_b061_cbb0e95d6f95.slice/crio-fc5478acf24d87effe7bdded05558216a70fde09ac0a557ab41f37e8c2f131f4 WatchSource:0}: Error finding container fc5478acf24d87effe7bdded05558216a70fde09ac0a557ab41f37e8c2f131f4: Status 404 returned error can't find the container with id fc5478acf24d87effe7bdded05558216a70fde09ac0a557ab41f37e8c2f131f4 Nov 23 14:56:37 crc kubenswrapper[4718]: I1123 14:56:37.455621 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7bf7789474-dgc67" event={"ID":"0e181755-dfbb-4608-b061-cbb0e95d6f95","Type":"ContainerStarted","Data":"fc5478acf24d87effe7bdded05558216a70fde09ac0a557ab41f37e8c2f131f4"} Nov 23 14:56:39 crc kubenswrapper[4718]: I1123 14:56:39.470625 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-768fb95d78-mp5lj" event={"ID":"44722e76-1f31-46d4-b765-abd86f655b27","Type":"ContainerStarted","Data":"8b546dc7cb06d49811a0d5828d6e5ae3d571ca682471ae6371a9ca7058b2276c"} Nov 23 14:56:39 crc kubenswrapper[4718]: I1123 14:56:39.470952 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-768fb95d78-mp5lj" Nov 23 14:56:39 crc kubenswrapper[4718]: I1123 14:56:39.500746 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-768fb95d78-mp5lj" podStartSLOduration=1.585834076 podStartE2EDuration="4.500731984s" podCreationTimestamp="2025-11-23 14:56:35 +0000 UTC" firstStartedPulling="2025-11-23 14:56:36.097292966 +0000 UTC m=+647.336912800" lastFinishedPulling="2025-11-23 14:56:39.012190844 +0000 UTC m=+650.251810708" observedRunningTime="2025-11-23 14:56:39.496811935 +0000 UTC m=+650.736431779" watchObservedRunningTime="2025-11-23 14:56:39.500731984 +0000 UTC m=+650.740351828" Nov 23 14:56:43 crc kubenswrapper[4718]: I1123 14:56:43.495911 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7bf7789474-dgc67" event={"ID":"0e181755-dfbb-4608-b061-cbb0e95d6f95","Type":"ContainerStarted","Data":"d06fd5e8a892d0f3d72af811af3cf8b6c38533559cfbb150b504c19a6d75db14"} Nov 23 14:56:43 crc kubenswrapper[4718]: I1123 14:56:43.496416 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7bf7789474-dgc67" Nov 23 14:56:43 crc kubenswrapper[4718]: I1123 14:56:43.514649 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7bf7789474-dgc67" podStartSLOduration=2.248332268 podStartE2EDuration="8.51463663s" 
podCreationTimestamp="2025-11-23 14:56:35 +0000 UTC" firstStartedPulling="2025-11-23 14:56:36.602353285 +0000 UTC m=+647.841973129" lastFinishedPulling="2025-11-23 14:56:42.868657647 +0000 UTC m=+654.108277491" observedRunningTime="2025-11-23 14:56:43.512075889 +0000 UTC m=+654.751695733" watchObservedRunningTime="2025-11-23 14:56:43.51463663 +0000 UTC m=+654.754256474" Nov 23 14:56:56 crc kubenswrapper[4718]: I1123 14:56:56.189245 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7bf7789474-dgc67" Nov 23 14:57:15 crc kubenswrapper[4718]: I1123 14:57:15.802300 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-768fb95d78-mp5lj" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.511188 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-q5hrx"] Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.513272 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-q5hrx" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.515977 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-h8lsp" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.516942 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.517898 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.518892 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-z66k9"] Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.522380 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-z66k9" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.523210 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-z66k9"] Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.524167 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.566078 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ced693f2-68ad-4fd1-ab33-c03e8d9fcc56-reloader\") pod \"frr-k8s-q5hrx\" (UID: \"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56\") " pod="metallb-system/frr-k8s-q5hrx" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.566389 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svvvc\" (UniqueName: \"kubernetes.io/projected/ced693f2-68ad-4fd1-ab33-c03e8d9fcc56-kube-api-access-svvvc\") pod \"frr-k8s-q5hrx\" (UID: \"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56\") " pod="metallb-system/frr-k8s-q5hrx" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.566531 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ced693f2-68ad-4fd1-ab33-c03e8d9fcc56-frr-sockets\") pod \"frr-k8s-q5hrx\" (UID: \"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56\") " pod="metallb-system/frr-k8s-q5hrx" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.566643 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ced693f2-68ad-4fd1-ab33-c03e8d9fcc56-metrics\") pod \"frr-k8s-q5hrx\" (UID: \"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56\") " pod="metallb-system/frr-k8s-q5hrx" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.566746 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ced693f2-68ad-4fd1-ab33-c03e8d9fcc56-frr-conf\") pod \"frr-k8s-q5hrx\" (UID: \"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56\") " pod="metallb-system/frr-k8s-q5hrx" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.566876 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ced693f2-68ad-4fd1-ab33-c03e8d9fcc56-frr-startup\") pod \"frr-k8s-q5hrx\" (UID: \"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56\") " pod="metallb-system/frr-k8s-q5hrx" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.566986 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ced693f2-68ad-4fd1-ab33-c03e8d9fcc56-metrics-certs\") pod \"frr-k8s-q5hrx\" (UID: \"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56\") " pod="metallb-system/frr-k8s-q5hrx" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.567096 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcqk4\" (UniqueName: \"kubernetes.io/projected/5cd190ff-29e1-4d41-9687-57c554820cb4-kube-api-access-jcqk4\") pod \"frr-k8s-webhook-server-6998585d5-z66k9\" (UID: \"5cd190ff-29e1-4d41-9687-57c554820cb4\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-z66k9" Nov 23 14:57:16 crc 
kubenswrapper[4718]: I1123 14:57:16.567215 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5cd190ff-29e1-4d41-9687-57c554820cb4-cert\") pod \"frr-k8s-webhook-server-6998585d5-z66k9\" (UID: \"5cd190ff-29e1-4d41-9687-57c554820cb4\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-z66k9" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.605283 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-65gj6"] Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.606100 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-65gj6" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.608458 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-jlt48" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.609956 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.610295 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.610558 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.635709 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-n4n8k"] Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.636788 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-n4n8k" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.639257 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.662274 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-n4n8k"] Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.667939 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28328d3c-1a5b-45f9-a606-9f604db00a0a-cert\") pod \"controller-6c7b4b5f48-n4n8k\" (UID: \"28328d3c-1a5b-45f9-a606-9f604db00a0a\") " pod="metallb-system/controller-6c7b4b5f48-n4n8k" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.668674 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ced693f2-68ad-4fd1-ab33-c03e8d9fcc56-frr-conf\") pod \"frr-k8s-q5hrx\" (UID: \"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56\") " pod="metallb-system/frr-k8s-q5hrx" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.669164 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ced693f2-68ad-4fd1-ab33-c03e8d9fcc56-frr-startup\") pod \"frr-k8s-q5hrx\" (UID: \"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56\") " pod="metallb-system/frr-k8s-q5hrx" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.669282 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e56ba209-dfa7-4f05-b9fd-e156af86cd9f-metrics-certs\") pod \"speaker-65gj6\" (UID: \"e56ba209-dfa7-4f05-b9fd-e156af86cd9f\") " 
pod="metallb-system/speaker-65gj6" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.669363 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkm5w\" (UniqueName: \"kubernetes.io/projected/e56ba209-dfa7-4f05-b9fd-e156af86cd9f-kube-api-access-lkm5w\") pod \"speaker-65gj6\" (UID: \"e56ba209-dfa7-4f05-b9fd-e156af86cd9f\") " pod="metallb-system/speaker-65gj6" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.669480 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ced693f2-68ad-4fd1-ab33-c03e8d9fcc56-metrics-certs\") pod \"frr-k8s-q5hrx\" (UID: \"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56\") " pod="metallb-system/frr-k8s-q5hrx" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.669591 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcqk4\" (UniqueName: \"kubernetes.io/projected/5cd190ff-29e1-4d41-9687-57c554820cb4-kube-api-access-jcqk4\") pod \"frr-k8s-webhook-server-6998585d5-z66k9\" (UID: \"5cd190ff-29e1-4d41-9687-57c554820cb4\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-z66k9" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.669685 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28328d3c-1a5b-45f9-a606-9f604db00a0a-metrics-certs\") pod \"controller-6c7b4b5f48-n4n8k\" (UID: \"28328d3c-1a5b-45f9-a606-9f604db00a0a\") " pod="metallb-system/controller-6c7b4b5f48-n4n8k" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.669774 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5cd190ff-29e1-4d41-9687-57c554820cb4-cert\") pod \"frr-k8s-webhook-server-6998585d5-z66k9\" (UID: \"5cd190ff-29e1-4d41-9687-57c554820cb4\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-z66k9" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.669860 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ced693f2-68ad-4fd1-ab33-c03e8d9fcc56-reloader\") pod \"frr-k8s-q5hrx\" (UID: \"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56\") " pod="metallb-system/frr-k8s-q5hrx" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.669973 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svvvc\" (UniqueName: \"kubernetes.io/projected/ced693f2-68ad-4fd1-ab33-c03e8d9fcc56-kube-api-access-svvvc\") pod \"frr-k8s-q5hrx\" (UID: \"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56\") " pod="metallb-system/frr-k8s-q5hrx" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.670058 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ced693f2-68ad-4fd1-ab33-c03e8d9fcc56-frr-sockets\") pod \"frr-k8s-q5hrx\" (UID: \"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56\") " pod="metallb-system/frr-k8s-q5hrx" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.670142 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e56ba209-dfa7-4f05-b9fd-e156af86cd9f-memberlist\") pod \"speaker-65gj6\" (UID: \"e56ba209-dfa7-4f05-b9fd-e156af86cd9f\") " pod="metallb-system/speaker-65gj6" Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.670231 4718 
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.670231 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ced693f2-68ad-4fd1-ab33-c03e8d9fcc56-metrics\") pod \"frr-k8s-q5hrx\" (UID: \"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56\") " pod="metallb-system/frr-k8s-q5hrx"
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.670309 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl9sg\" (UniqueName: \"kubernetes.io/projected/28328d3c-1a5b-45f9-a606-9f604db00a0a-kube-api-access-bl9sg\") pod \"controller-6c7b4b5f48-n4n8k\" (UID: \"28328d3c-1a5b-45f9-a606-9f604db00a0a\") " pod="metallb-system/controller-6c7b4b5f48-n4n8k"
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.670404 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e56ba209-dfa7-4f05-b9fd-e156af86cd9f-metallb-excludel2\") pod \"speaker-65gj6\" (UID: \"e56ba209-dfa7-4f05-b9fd-e156af86cd9f\") " pod="metallb-system/speaker-65gj6"
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.669130 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ced693f2-68ad-4fd1-ab33-c03e8d9fcc56-frr-conf\") pod \"frr-k8s-q5hrx\" (UID: \"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56\") " pod="metallb-system/frr-k8s-q5hrx"
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.671507 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ced693f2-68ad-4fd1-ab33-c03e8d9fcc56-frr-startup\") pod \"frr-k8s-q5hrx\" (UID: \"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56\") " pod="metallb-system/frr-k8s-q5hrx"
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.672959 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ced693f2-68ad-4fd1-ab33-c03e8d9fcc56-frr-sockets\") pod \"frr-k8s-q5hrx\" (UID: \"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56\") " pod="metallb-system/frr-k8s-q5hrx"
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.672961 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ced693f2-68ad-4fd1-ab33-c03e8d9fcc56-reloader\") pod \"frr-k8s-q5hrx\" (UID: \"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56\") " pod="metallb-system/frr-k8s-q5hrx"
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.673047 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ced693f2-68ad-4fd1-ab33-c03e8d9fcc56-metrics\") pod \"frr-k8s-q5hrx\" (UID: \"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56\") " pod="metallb-system/frr-k8s-q5hrx"
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.688065 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ced693f2-68ad-4fd1-ab33-c03e8d9fcc56-metrics-certs\") pod \"frr-k8s-q5hrx\" (UID: \"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56\") " pod="metallb-system/frr-k8s-q5hrx"
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.699249 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5cd190ff-29e1-4d41-9687-57c554820cb4-cert\") pod \"frr-k8s-webhook-server-6998585d5-z66k9\" (UID: \"5cd190ff-29e1-4d41-9687-57c554820cb4\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-z66k9"
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.701006 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svvvc\" (UniqueName: \"kubernetes.io/projected/ced693f2-68ad-4fd1-ab33-c03e8d9fcc56-kube-api-access-svvvc\") pod \"frr-k8s-q5hrx\" (UID: \"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56\") " pod="metallb-system/frr-k8s-q5hrx"
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.714879 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcqk4\" (UniqueName: \"kubernetes.io/projected/5cd190ff-29e1-4d41-9687-57c554820cb4-kube-api-access-jcqk4\") pod \"frr-k8s-webhook-server-6998585d5-z66k9\" (UID: \"5cd190ff-29e1-4d41-9687-57c554820cb4\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-z66k9"
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.771226 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e56ba209-dfa7-4f05-b9fd-e156af86cd9f-memberlist\") pod \"speaker-65gj6\" (UID: \"e56ba209-dfa7-4f05-b9fd-e156af86cd9f\") " pod="metallb-system/speaker-65gj6"
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.771575 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl9sg\" (UniqueName: \"kubernetes.io/projected/28328d3c-1a5b-45f9-a606-9f604db00a0a-kube-api-access-bl9sg\") pod \"controller-6c7b4b5f48-n4n8k\" (UID: \"28328d3c-1a5b-45f9-a606-9f604db00a0a\") " pod="metallb-system/controller-6c7b4b5f48-n4n8k"
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.771599 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e56ba209-dfa7-4f05-b9fd-e156af86cd9f-metallb-excludel2\") pod \"speaker-65gj6\" (UID: \"e56ba209-dfa7-4f05-b9fd-e156af86cd9f\") " pod="metallb-system/speaker-65gj6"
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.771615 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28328d3c-1a5b-45f9-a606-9f604db00a0a-cert\") pod \"controller-6c7b4b5f48-n4n8k\" (UID: \"28328d3c-1a5b-45f9-a606-9f604db00a0a\") " pod="metallb-system/controller-6c7b4b5f48-n4n8k"
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.771652 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e56ba209-dfa7-4f05-b9fd-e156af86cd9f-metrics-certs\") pod \"speaker-65gj6\" (UID: \"e56ba209-dfa7-4f05-b9fd-e156af86cd9f\") " pod="metallb-system/speaker-65gj6"
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.771672 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkm5w\" (UniqueName: \"kubernetes.io/projected/e56ba209-dfa7-4f05-b9fd-e156af86cd9f-kube-api-access-lkm5w\") pod \"speaker-65gj6\" (UID: \"e56ba209-dfa7-4f05-b9fd-e156af86cd9f\") " pod="metallb-system/speaker-65gj6"
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.771695 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28328d3c-1a5b-45f9-a606-9f604db00a0a-metrics-certs\") pod \"controller-6c7b4b5f48-n4n8k\" (UID: \"28328d3c-1a5b-45f9-a606-9f604db00a0a\") " pod="metallb-system/controller-6c7b4b5f48-n4n8k"
Nov 23 14:57:16 crc kubenswrapper[4718]: E1123 14:57:16.771506 4718 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Nov 23 14:57:16 crc kubenswrapper[4718]: E1123 14:57:16.771965 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e56ba209-dfa7-4f05-b9fd-e156af86cd9f-memberlist podName:e56ba209-dfa7-4f05-b9fd-e156af86cd9f nodeName:}" failed. No retries permitted until 2025-11-23 14:57:17.271946879 +0000 UTC m=+688.511566723 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e56ba209-dfa7-4f05-b9fd-e156af86cd9f-memberlist") pod "speaker-65gj6" (UID: "e56ba209-dfa7-4f05-b9fd-e156af86cd9f") : secret "metallb-memberlist" not found
Nov 23 14:57:16 crc kubenswrapper[4718]: E1123 14:57:16.771831 4718 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found
Nov 23 14:57:16 crc kubenswrapper[4718]: E1123 14:57:16.772002 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28328d3c-1a5b-45f9-a606-9f604db00a0a-metrics-certs podName:28328d3c-1a5b-45f9-a606-9f604db00a0a nodeName:}" failed. No retries permitted until 2025-11-23 14:57:17.27199621 +0000 UTC m=+688.511616054 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28328d3c-1a5b-45f9-a606-9f604db00a0a-metrics-certs") pod "controller-6c7b4b5f48-n4n8k" (UID: "28328d3c-1a5b-45f9-a606-9f604db00a0a") : secret "controller-certs-secret" not found
Nov 23 14:57:16 crc kubenswrapper[4718]: E1123 14:57:16.772150 4718 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Nov 23 14:57:16 crc kubenswrapper[4718]: E1123 14:57:16.772179 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e56ba209-dfa7-4f05-b9fd-e156af86cd9f-metrics-certs podName:e56ba209-dfa7-4f05-b9fd-e156af86cd9f nodeName:}" failed. No retries permitted until 2025-11-23 14:57:17.272171955 +0000 UTC m=+688.511791799 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e56ba209-dfa7-4f05-b9fd-e156af86cd9f-metrics-certs") pod "speaker-65gj6" (UID: "e56ba209-dfa7-4f05-b9fd-e156af86cd9f") : secret "speaker-certs-secret" not found
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.773265 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e56ba209-dfa7-4f05-b9fd-e156af86cd9f-metallb-excludel2\") pod \"speaker-65gj6\" (UID: \"e56ba209-dfa7-4f05-b9fd-e156af86cd9f\") " pod="metallb-system/speaker-65gj6"
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.774119 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.787987 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28328d3c-1a5b-45f9-a606-9f604db00a0a-cert\") pod \"controller-6c7b4b5f48-n4n8k\" (UID: \"28328d3c-1a5b-45f9-a606-9f604db00a0a\") " pod="metallb-system/controller-6c7b4b5f48-n4n8k"
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.792260 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl9sg\" (UniqueName: \"kubernetes.io/projected/28328d3c-1a5b-45f9-a606-9f604db00a0a-kube-api-access-bl9sg\") pod \"controller-6c7b4b5f48-n4n8k\" (UID: \"28328d3c-1a5b-45f9-a606-9f604db00a0a\") " pod="metallb-system/controller-6c7b4b5f48-n4n8k"
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.792855 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkm5w\" (UniqueName: \"kubernetes.io/projected/e56ba209-dfa7-4f05-b9fd-e156af86cd9f-kube-api-access-lkm5w\") pod \"speaker-65gj6\" (UID: \"e56ba209-dfa7-4f05-b9fd-e156af86cd9f\") " pod="metallb-system/speaker-65gj6"
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.837082 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-q5hrx"
Nov 23 14:57:16 crc kubenswrapper[4718]: I1123 14:57:16.849883 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-z66k9"
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-z66k9" Nov 23 14:57:17 crc kubenswrapper[4718]: I1123 14:57:17.256826 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-z66k9"] Nov 23 14:57:17 crc kubenswrapper[4718]: W1123 14:57:17.262491 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cd190ff_29e1_4d41_9687_57c554820cb4.slice/crio-db0e7371ca3e171151d4eb13c3c805a47467f458846428a69cb8ea88a3424ab1 WatchSource:0}: Error finding container db0e7371ca3e171151d4eb13c3c805a47467f458846428a69cb8ea88a3424ab1: Status 404 returned error can't find the container with id db0e7371ca3e171151d4eb13c3c805a47467f458846428a69cb8ea88a3424ab1 Nov 23 14:57:17 crc kubenswrapper[4718]: I1123 14:57:17.277413 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e56ba209-dfa7-4f05-b9fd-e156af86cd9f-metrics-certs\") pod \"speaker-65gj6\" (UID: \"e56ba209-dfa7-4f05-b9fd-e156af86cd9f\") " pod="metallb-system/speaker-65gj6" Nov 23 14:57:17 crc kubenswrapper[4718]: I1123 14:57:17.277540 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28328d3c-1a5b-45f9-a606-9f604db00a0a-metrics-certs\") pod \"controller-6c7b4b5f48-n4n8k\" (UID: \"28328d3c-1a5b-45f9-a606-9f604db00a0a\") " pod="metallb-system/controller-6c7b4b5f48-n4n8k" Nov 23 14:57:17 crc kubenswrapper[4718]: I1123 14:57:17.277617 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e56ba209-dfa7-4f05-b9fd-e156af86cd9f-memberlist\") pod \"speaker-65gj6\" (UID: \"e56ba209-dfa7-4f05-b9fd-e156af86cd9f\") " pod="metallb-system/speaker-65gj6" Nov 23 14:57:17 crc kubenswrapper[4718]: E1123 14:57:17.277827 4718 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 23 14:57:17 crc kubenswrapper[4718]: E1123 14:57:17.277890 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e56ba209-dfa7-4f05-b9fd-e156af86cd9f-memberlist podName:e56ba209-dfa7-4f05-b9fd-e156af86cd9f nodeName:}" failed. No retries permitted until 2025-11-23 14:57:18.277871921 +0000 UTC m=+689.517491775 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e56ba209-dfa7-4f05-b9fd-e156af86cd9f-memberlist") pod "speaker-65gj6" (UID: "e56ba209-dfa7-4f05-b9fd-e156af86cd9f") : secret "metallb-memberlist" not found Nov 23 14:57:17 crc kubenswrapper[4718]: I1123 14:57:17.283887 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e56ba209-dfa7-4f05-b9fd-e156af86cd9f-metrics-certs\") pod \"speaker-65gj6\" (UID: \"e56ba209-dfa7-4f05-b9fd-e156af86cd9f\") " pod="metallb-system/speaker-65gj6" Nov 23 14:57:17 crc kubenswrapper[4718]: I1123 14:57:17.284251 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28328d3c-1a5b-45f9-a606-9f604db00a0a-metrics-certs\") pod \"controller-6c7b4b5f48-n4n8k\" (UID: \"28328d3c-1a5b-45f9-a606-9f604db00a0a\") " pod="metallb-system/controller-6c7b4b5f48-n4n8k" Nov 23 14:57:17 crc kubenswrapper[4718]: I1123 14:57:17.559233 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-n4n8k" Nov 23 14:57:17 crc kubenswrapper[4718]: I1123 14:57:17.685298 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q5hrx" event={"ID":"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56","Type":"ContainerStarted","Data":"f8b6f4fc1e9bd0d23a63477b9ecb64a75772b7a3b420ecb06e64a3e423172d99"} Nov 23 14:57:17 crc kubenswrapper[4718]: I1123 14:57:17.686452 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-z66k9" event={"ID":"5cd190ff-29e1-4d41-9687-57c554820cb4","Type":"ContainerStarted","Data":"db0e7371ca3e171151d4eb13c3c805a47467f458846428a69cb8ea88a3424ab1"} Nov 23 14:57:17 crc kubenswrapper[4718]: I1123 14:57:17.799266 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-n4n8k"] Nov 23 14:57:17 crc kubenswrapper[4718]: W1123 14:57:17.810149 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28328d3c_1a5b_45f9_a606_9f604db00a0a.slice/crio-8dfd8079479d357fe21494d73c5a0f50a2bb8494f9464b3756c4b0a7b813b52b WatchSource:0}: Error finding container 8dfd8079479d357fe21494d73c5a0f50a2bb8494f9464b3756c4b0a7b813b52b: Status 404 returned error can't find the container with id 8dfd8079479d357fe21494d73c5a0f50a2bb8494f9464b3756c4b0a7b813b52b Nov 23 14:57:18 crc kubenswrapper[4718]: I1123 14:57:18.291170 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e56ba209-dfa7-4f05-b9fd-e156af86cd9f-memberlist\") pod \"speaker-65gj6\" (UID: \"e56ba209-dfa7-4f05-b9fd-e156af86cd9f\") " pod="metallb-system/speaker-65gj6" Nov 23 14:57:18 crc kubenswrapper[4718]: I1123 14:57:18.298169 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e56ba209-dfa7-4f05-b9fd-e156af86cd9f-memberlist\") pod \"speaker-65gj6\" (UID: \"e56ba209-dfa7-4f05-b9fd-e156af86cd9f\") " pod="metallb-system/speaker-65gj6" Nov 23 14:57:18 crc kubenswrapper[4718]: I1123 14:57:18.421547 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-65gj6" Nov 23 14:57:18 crc kubenswrapper[4718]: W1123 14:57:18.442729 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode56ba209_dfa7_4f05_b9fd_e156af86cd9f.slice/crio-cf655d72e4d3064cb32923debdba5fe7bb1ce855406b744225d149c70b920ad8 WatchSource:0}: Error finding container cf655d72e4d3064cb32923debdba5fe7bb1ce855406b744225d149c70b920ad8: Status 404 returned error can't find the container with id cf655d72e4d3064cb32923debdba5fe7bb1ce855406b744225d149c70b920ad8 Nov 23 14:57:18 crc kubenswrapper[4718]: I1123 14:57:18.696093 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-n4n8k" event={"ID":"28328d3c-1a5b-45f9-a606-9f604db00a0a","Type":"ContainerStarted","Data":"8c10a59db21284bd387930ca5154cbe31306f73cbdfefab60c987f1482b5bd91"} Nov 23 14:57:18 crc kubenswrapper[4718]: I1123 14:57:18.696466 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-n4n8k" event={"ID":"28328d3c-1a5b-45f9-a606-9f604db00a0a","Type":"ContainerStarted","Data":"e990a3c65419f22186610026aa1791ad9bf5c319d790d375519c9ad762245286"} Nov 23 14:57:18 crc kubenswrapper[4718]: I1123 14:57:18.696503 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-n4n8k" Nov 23 14:57:18 crc kubenswrapper[4718]: I1123 14:57:18.696522 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-n4n8k" event={"ID":"28328d3c-1a5b-45f9-a606-9f604db00a0a","Type":"ContainerStarted","Data":"8dfd8079479d357fe21494d73c5a0f50a2bb8494f9464b3756c4b0a7b813b52b"} Nov 23 14:57:18 crc kubenswrapper[4718]: I1123 14:57:18.701886 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-65gj6" event={"ID":"e56ba209-dfa7-4f05-b9fd-e156af86cd9f","Type":"ContainerStarted","Data":"26963bb098fdaf5e35134c04c445b7bc66a962b73315b97793a51b78faf7bb04"} Nov 23 14:57:18 crc kubenswrapper[4718]: I1123 14:57:18.701941 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-65gj6" event={"ID":"e56ba209-dfa7-4f05-b9fd-e156af86cd9f","Type":"ContainerStarted","Data":"cf655d72e4d3064cb32923debdba5fe7bb1ce855406b744225d149c70b920ad8"} Nov 23 14:57:18 crc kubenswrapper[4718]: I1123 14:57:18.721250 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-n4n8k" podStartSLOduration=2.721225293 podStartE2EDuration="2.721225293s" podCreationTimestamp="2025-11-23 14:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:57:18.715478164 +0000 UTC m=+689.955098028" watchObservedRunningTime="2025-11-23 14:57:18.721225293 +0000 UTC m=+689.960845137" Nov 23 14:57:19 crc kubenswrapper[4718]: I1123 14:57:19.708351 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-65gj6" event={"ID":"e56ba209-dfa7-4f05-b9fd-e156af86cd9f","Type":"ContainerStarted","Data":"5680bd8acf45da610baec913309ef13da74ae68a294b4e8038f5257d8615a45c"} Nov 23 14:57:19 crc kubenswrapper[4718]: I1123 14:57:19.735691 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-65gj6" podStartSLOduration=3.735675659 podStartE2EDuration="3.735675659s" podCreationTimestamp="2025-11-23 14:57:16 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:57:19.734867857 +0000 UTC m=+690.974487701" watchObservedRunningTime="2025-11-23 14:57:19.735675659 +0000 UTC m=+690.975295493" Nov 23 14:57:20 crc kubenswrapper[4718]: I1123 14:57:20.714191 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-65gj6" Nov 23 14:57:23 crc kubenswrapper[4718]: I1123 14:57:23.053286 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 14:57:23 crc kubenswrapper[4718]: I1123 14:57:23.053660 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 14:57:24 crc kubenswrapper[4718]: I1123 14:57:24.755406 4718 generic.go:334] "Generic (PLEG): container finished" podID="ced693f2-68ad-4fd1-ab33-c03e8d9fcc56" containerID="d39b96d04c187cee774cfba6dc53a2cbd94db552888e807827f1df34afd7ae4b" exitCode=0 Nov 23 14:57:24 crc kubenswrapper[4718]: I1123 14:57:24.755486 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q5hrx" event={"ID":"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56","Type":"ContainerDied","Data":"d39b96d04c187cee774cfba6dc53a2cbd94db552888e807827f1df34afd7ae4b"} Nov 23 14:57:24 crc kubenswrapper[4718]: I1123 14:57:24.761480 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-z66k9" event={"ID":"5cd190ff-29e1-4d41-9687-57c554820cb4","Type":"ContainerStarted","Data":"86728df8a0534dc8295751bdebb766c5126b4019dfed2c0c4da1b2ac9ce53861"} Nov 23 14:57:24 crc kubenswrapper[4718]: I1123 14:57:24.761643 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-z66k9" Nov 23 14:57:24 crc kubenswrapper[4718]: I1123 14:57:24.812494 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-z66k9" podStartSLOduration=2.164646578 podStartE2EDuration="8.812474805s" podCreationTimestamp="2025-11-23 14:57:16 +0000 UTC" firstStartedPulling="2025-11-23 14:57:17.265509238 +0000 UTC m=+688.505129082" lastFinishedPulling="2025-11-23 14:57:23.913337455 +0000 UTC m=+695.152957309" observedRunningTime="2025-11-23 14:57:24.810119749 +0000 UTC m=+696.049739593" watchObservedRunningTime="2025-11-23 14:57:24.812474805 +0000 UTC m=+696.052094649" Nov 23 14:57:25 crc kubenswrapper[4718]: I1123 14:57:25.769711 4718 generic.go:334] "Generic (PLEG): container finished" podID="ced693f2-68ad-4fd1-ab33-c03e8d9fcc56" containerID="6b2a3ea4847b16bfcd23f108a364b342340e5fc366644bb2821378b247ff56b2" exitCode=0 Nov 23 14:57:25 crc kubenswrapper[4718]: I1123 14:57:25.769773 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q5hrx" event={"ID":"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56","Type":"ContainerDied","Data":"6b2a3ea4847b16bfcd23f108a364b342340e5fc366644bb2821378b247ff56b2"} Nov 23 14:57:26 crc kubenswrapper[4718]: I1123 14:57:26.780671 4718 generic.go:334] "Generic (PLEG): container finished" 
podID="ced693f2-68ad-4fd1-ab33-c03e8d9fcc56" containerID="4c7cff96fa22c2ce55ac3143c27c1a610e6844af0b6344dcca74f6306175c36b" exitCode=0 Nov 23 14:57:26 crc kubenswrapper[4718]: I1123 14:57:26.780762 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q5hrx" event={"ID":"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56","Type":"ContainerDied","Data":"4c7cff96fa22c2ce55ac3143c27c1a610e6844af0b6344dcca74f6306175c36b"} Nov 23 14:57:27 crc kubenswrapper[4718]: I1123 14:57:27.566278 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-n4n8k" Nov 23 14:57:27 crc kubenswrapper[4718]: I1123 14:57:27.801841 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q5hrx" event={"ID":"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56","Type":"ContainerStarted","Data":"69c0a80bd3bee065d8e9be6cdb48fe3d3f293bbedd0746527703190c7132f347"} Nov 23 14:57:27 crc kubenswrapper[4718]: I1123 14:57:27.801894 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q5hrx" event={"ID":"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56","Type":"ContainerStarted","Data":"4ca58452e84c44ebf9a38c441a3684b55d73b1f60a246ebe95b05636b21ec5b2"} Nov 23 14:57:27 crc kubenswrapper[4718]: I1123 14:57:27.801950 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q5hrx" event={"ID":"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56","Type":"ContainerStarted","Data":"6e2f2b48beae1301d55a409d42cde49f7d8f39c1f4aacdabd2a4dd6c62081071"} Nov 23 14:57:27 crc kubenswrapper[4718]: I1123 14:57:27.801962 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q5hrx" event={"ID":"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56","Type":"ContainerStarted","Data":"42dfe8ecea8311debfd6bfb40b738164a8ab03792a466888048ebcaede5c11a8"} Nov 23 14:57:27 crc kubenswrapper[4718]: I1123 14:57:27.801973 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q5hrx" event={"ID":"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56","Type":"ContainerStarted","Data":"f5cb15321cfb997bc2887abe207586651778b6911c4a5ba91b843f29672264fe"} Nov 23 14:57:28 crc kubenswrapper[4718]: I1123 14:57:28.425213 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-65gj6" Nov 23 14:57:28 crc kubenswrapper[4718]: I1123 14:57:28.812073 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q5hrx" event={"ID":"ced693f2-68ad-4fd1-ab33-c03e8d9fcc56","Type":"ContainerStarted","Data":"01187098c57eb382d2a0d949b1f1e6384a402cb77a6c7ed7d385ec147cefcb4c"} Nov 23 14:57:28 crc kubenswrapper[4718]: I1123 14:57:28.812308 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-q5hrx" Nov 23 14:57:28 crc kubenswrapper[4718]: I1123 14:57:28.832615 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-q5hrx" podStartSLOduration=5.883982672 podStartE2EDuration="12.832594154s" podCreationTimestamp="2025-11-23 14:57:16 +0000 UTC" firstStartedPulling="2025-11-23 14:57:16.971677085 +0000 UTC m=+688.211296929" lastFinishedPulling="2025-11-23 14:57:23.920288557 +0000 UTC m=+695.159908411" observedRunningTime="2025-11-23 14:57:28.83139081 +0000 UTC m=+700.071010664" watchObservedRunningTime="2025-11-23 14:57:28.832594154 +0000 UTC m=+700.072213998" Nov 23 14:57:31 crc kubenswrapper[4718]: I1123 14:57:31.496589 4718 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/openstack-operator-index-4jrvb"] Nov 23 14:57:31 crc kubenswrapper[4718]: I1123 14:57:31.498183 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-4jrvb" Nov 23 14:57:31 crc kubenswrapper[4718]: I1123 14:57:31.508037 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-xdd96" Nov 23 14:57:31 crc kubenswrapper[4718]: I1123 14:57:31.509014 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 23 14:57:31 crc kubenswrapper[4718]: I1123 14:57:31.509306 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 23 14:57:31 crc kubenswrapper[4718]: I1123 14:57:31.512842 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4jrvb"] Nov 23 14:57:31 crc kubenswrapper[4718]: I1123 14:57:31.583594 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6tj9\" (UniqueName: \"kubernetes.io/projected/16ae4200-81b3-4c2e-a0b7-9fb940b52aec-kube-api-access-n6tj9\") pod \"openstack-operator-index-4jrvb\" (UID: \"16ae4200-81b3-4c2e-a0b7-9fb940b52aec\") " pod="openstack-operators/openstack-operator-index-4jrvb" Nov 23 14:57:31 crc kubenswrapper[4718]: I1123 14:57:31.685421 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6tj9\" (UniqueName: \"kubernetes.io/projected/16ae4200-81b3-4c2e-a0b7-9fb940b52aec-kube-api-access-n6tj9\") pod \"openstack-operator-index-4jrvb\" (UID: \"16ae4200-81b3-4c2e-a0b7-9fb940b52aec\") " pod="openstack-operators/openstack-operator-index-4jrvb" Nov 23 14:57:31 crc kubenswrapper[4718]: I1123 14:57:31.704141 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6tj9\" (UniqueName: \"kubernetes.io/projected/16ae4200-81b3-4c2e-a0b7-9fb940b52aec-kube-api-access-n6tj9\") pod \"openstack-operator-index-4jrvb\" (UID: \"16ae4200-81b3-4c2e-a0b7-9fb940b52aec\") " pod="openstack-operators/openstack-operator-index-4jrvb" Nov 23 14:57:31 crc kubenswrapper[4718]: I1123 14:57:31.837350 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-q5hrx" Nov 23 14:57:31 crc kubenswrapper[4718]: I1123 14:57:31.837528 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-4jrvb" Nov 23 14:57:31 crc kubenswrapper[4718]: I1123 14:57:31.894030 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-q5hrx" Nov 23 14:57:32 crc kubenswrapper[4718]: I1123 14:57:32.250222 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4jrvb"] Nov 23 14:57:32 crc kubenswrapper[4718]: I1123 14:57:32.844418 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4jrvb" event={"ID":"16ae4200-81b3-4c2e-a0b7-9fb940b52aec","Type":"ContainerStarted","Data":"923bdbf233c1af4ae965720ea416d75e0b3ddbcbf0cc0677a17230edc1ca9a41"} Nov 23 14:57:34 crc kubenswrapper[4718]: I1123 14:57:34.851975 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-4jrvb"] Nov 23 14:57:35 crc kubenswrapper[4718]: I1123 14:57:35.466977 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-j47c5"] Nov 23 14:57:35 crc kubenswrapper[4718]: I1123 14:57:35.469288 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-j47c5" Nov 23 14:57:35 crc kubenswrapper[4718]: I1123 14:57:35.470739 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-j47c5"] Nov 23 14:57:35 crc kubenswrapper[4718]: I1123 14:57:35.537931 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddjmk\" (UniqueName: \"kubernetes.io/projected/6fbfe838-c27e-484d-a610-882fbb719e14-kube-api-access-ddjmk\") pod \"openstack-operator-index-j47c5\" (UID: \"6fbfe838-c27e-484d-a610-882fbb719e14\") " pod="openstack-operators/openstack-operator-index-j47c5" Nov 23 14:57:35 crc kubenswrapper[4718]: I1123 14:57:35.639512 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddjmk\" (UniqueName: \"kubernetes.io/projected/6fbfe838-c27e-484d-a610-882fbb719e14-kube-api-access-ddjmk\") pod \"openstack-operator-index-j47c5\" (UID: \"6fbfe838-c27e-484d-a610-882fbb719e14\") " pod="openstack-operators/openstack-operator-index-j47c5" Nov 23 14:57:35 crc kubenswrapper[4718]: I1123 14:57:35.663500 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddjmk\" (UniqueName: \"kubernetes.io/projected/6fbfe838-c27e-484d-a610-882fbb719e14-kube-api-access-ddjmk\") pod \"openstack-operator-index-j47c5\" (UID: \"6fbfe838-c27e-484d-a610-882fbb719e14\") " pod="openstack-operators/openstack-operator-index-j47c5" Nov 23 14:57:35 crc kubenswrapper[4718]: I1123 14:57:35.791691 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-j47c5" Nov 23 14:57:36 crc kubenswrapper[4718]: I1123 14:57:36.827534 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-j47c5"] Nov 23 14:57:36 crc kubenswrapper[4718]: I1123 14:57:36.840279 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-q5hrx" Nov 23 14:57:36 crc kubenswrapper[4718]: I1123 14:57:36.859950 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-z66k9" Nov 23 14:57:36 crc kubenswrapper[4718]: I1123 14:57:36.889902 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-4jrvb" podUID="16ae4200-81b3-4c2e-a0b7-9fb940b52aec" containerName="registry-server" containerID="cri-o://a988256c0ebb3f3352ac8af192b1d1a26527bb67f29dcdb42b5de0841d8d4c20" gracePeriod=2 Nov 23 14:57:36 crc kubenswrapper[4718]: I1123 14:57:36.890190 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4jrvb" event={"ID":"16ae4200-81b3-4c2e-a0b7-9fb940b52aec","Type":"ContainerStarted","Data":"a988256c0ebb3f3352ac8af192b1d1a26527bb67f29dcdb42b5de0841d8d4c20"} Nov 23 14:57:36 crc kubenswrapper[4718]: I1123 14:57:36.892923 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-j47c5" event={"ID":"6fbfe838-c27e-484d-a610-882fbb719e14","Type":"ContainerStarted","Data":"db44eda24206ecd72ea91fcae40128c8e0644a1f5458c9b3451ccb0695d5e662"} Nov 23 14:57:36 crc kubenswrapper[4718]: I1123 14:57:36.917173 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-4jrvb" podStartSLOduration=1.66538416 podStartE2EDuration="5.917154478s" podCreationTimestamp="2025-11-23 14:57:31 +0000 UTC" firstStartedPulling="2025-11-23 14:57:32.257993479 +0000 UTC m=+703.497613323" lastFinishedPulling="2025-11-23 14:57:36.509763777 +0000 UTC m=+707.749383641" observedRunningTime="2025-11-23 14:57:36.913036604 +0000 UTC m=+708.152656448" watchObservedRunningTime="2025-11-23 14:57:36.917154478 +0000 UTC m=+708.156774322" Nov 23 14:57:37 crc kubenswrapper[4718]: I1123 14:57:37.234129 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-4jrvb" Nov 23 14:57:37 crc kubenswrapper[4718]: I1123 14:57:37.259067 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6tj9\" (UniqueName: \"kubernetes.io/projected/16ae4200-81b3-4c2e-a0b7-9fb940b52aec-kube-api-access-n6tj9\") pod \"16ae4200-81b3-4c2e-a0b7-9fb940b52aec\" (UID: \"16ae4200-81b3-4c2e-a0b7-9fb940b52aec\") " Nov 23 14:57:37 crc kubenswrapper[4718]: I1123 14:57:37.264855 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16ae4200-81b3-4c2e-a0b7-9fb940b52aec-kube-api-access-n6tj9" (OuterVolumeSpecName: "kube-api-access-n6tj9") pod "16ae4200-81b3-4c2e-a0b7-9fb940b52aec" (UID: "16ae4200-81b3-4c2e-a0b7-9fb940b52aec"). InnerVolumeSpecName "kube-api-access-n6tj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:57:37 crc kubenswrapper[4718]: I1123 14:57:37.361172 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6tj9\" (UniqueName: \"kubernetes.io/projected/16ae4200-81b3-4c2e-a0b7-9fb940b52aec-kube-api-access-n6tj9\") on node \"crc\" DevicePath \"\"" Nov 23 14:57:37 crc kubenswrapper[4718]: I1123 14:57:37.900015 4718 generic.go:334] "Generic (PLEG): container finished" podID="16ae4200-81b3-4c2e-a0b7-9fb940b52aec" containerID="a988256c0ebb3f3352ac8af192b1d1a26527bb67f29dcdb42b5de0841d8d4c20" exitCode=0 Nov 23 14:57:37 crc kubenswrapper[4718]: I1123 14:57:37.900066 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4jrvb" event={"ID":"16ae4200-81b3-4c2e-a0b7-9fb940b52aec","Type":"ContainerDied","Data":"a988256c0ebb3f3352ac8af192b1d1a26527bb67f29dcdb42b5de0841d8d4c20"} Nov 23 14:57:37 crc kubenswrapper[4718]: I1123 14:57:37.900349 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4jrvb" event={"ID":"16ae4200-81b3-4c2e-a0b7-9fb940b52aec","Type":"ContainerDied","Data":"923bdbf233c1af4ae965720ea416d75e0b3ddbcbf0cc0677a17230edc1ca9a41"} Nov 23 14:57:37 crc kubenswrapper[4718]: I1123 14:57:37.900372 4718 scope.go:117] "RemoveContainer" containerID="a988256c0ebb3f3352ac8af192b1d1a26527bb67f29dcdb42b5de0841d8d4c20" Nov 23 14:57:37 crc kubenswrapper[4718]: I1123 14:57:37.900105 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-4jrvb" Nov 23 14:57:37 crc kubenswrapper[4718]: I1123 14:57:37.903621 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-j47c5" event={"ID":"6fbfe838-c27e-484d-a610-882fbb719e14","Type":"ContainerStarted","Data":"f84adc285b444095755638c635b124d20a81273aa922c2f09617aa7e2ff4b286"} Nov 23 14:57:37 crc kubenswrapper[4718]: I1123 14:57:37.920482 4718 scope.go:117] "RemoveContainer" containerID="a988256c0ebb3f3352ac8af192b1d1a26527bb67f29dcdb42b5de0841d8d4c20" Nov 23 14:57:37 crc kubenswrapper[4718]: E1123 14:57:37.921159 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a988256c0ebb3f3352ac8af192b1d1a26527bb67f29dcdb42b5de0841d8d4c20\": container with ID starting with a988256c0ebb3f3352ac8af192b1d1a26527bb67f29dcdb42b5de0841d8d4c20 not found: ID does not exist" containerID="a988256c0ebb3f3352ac8af192b1d1a26527bb67f29dcdb42b5de0841d8d4c20" Nov 23 14:57:37 crc kubenswrapper[4718]: I1123 14:57:37.921226 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a988256c0ebb3f3352ac8af192b1d1a26527bb67f29dcdb42b5de0841d8d4c20"} err="failed to get container status \"a988256c0ebb3f3352ac8af192b1d1a26527bb67f29dcdb42b5de0841d8d4c20\": rpc error: code = NotFound desc = could not find container \"a988256c0ebb3f3352ac8af192b1d1a26527bb67f29dcdb42b5de0841d8d4c20\": container with ID starting with a988256c0ebb3f3352ac8af192b1d1a26527bb67f29dcdb42b5de0841d8d4c20 not found: ID does not exist" Nov 23 14:57:37 crc kubenswrapper[4718]: I1123 14:57:37.926783 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-j47c5" podStartSLOduration=2.841207298 podStartE2EDuration="2.926756959s" podCreationTimestamp="2025-11-23 14:57:35 +0000 UTC" firstStartedPulling="2025-11-23 14:57:36.838366534 
+0000 UTC m=+708.077986388" lastFinishedPulling="2025-11-23 14:57:36.923916205 +0000 UTC m=+708.163536049" observedRunningTime="2025-11-23 14:57:37.922170643 +0000 UTC m=+709.161790487" watchObservedRunningTime="2025-11-23 14:57:37.926756959 +0000 UTC m=+709.166376843" Nov 23 14:57:37 crc kubenswrapper[4718]: I1123 14:57:37.946201 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-4jrvb"] Nov 23 14:57:37 crc kubenswrapper[4718]: I1123 14:57:37.950006 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-4jrvb"] Nov 23 14:57:38 crc kubenswrapper[4718]: I1123 14:57:38.448668 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16ae4200-81b3-4c2e-a0b7-9fb940b52aec" path="/var/lib/kubelet/pods/16ae4200-81b3-4c2e-a0b7-9fb940b52aec/volumes" Nov 23 14:57:39 crc kubenswrapper[4718]: I1123 14:57:39.999400 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xwp8r"] Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:39.999977 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" podUID="ef388340-e800-4dbf-a7cf-6d6bd9a1eecf" containerName="controller-manager" containerID="cri-o://22950400603ee3befe70223e6e0fcac3045c88a7bbef4e17e09cdc2ac1de22df" gracePeriod=30 Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.093995 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb"] Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.094235 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb" podUID="205d682b-7592-4b53-bcf6-0300c1084046" containerName="route-controller-manager" containerID="cri-o://423996cfbebbc46637a7d2dd67b1496c0db44c1d2f53c084d44bb3b29288ae37" gracePeriod=30 Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.416145 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.482388 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.602015 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/205d682b-7592-4b53-bcf6-0300c1084046-serving-cert\") pod \"205d682b-7592-4b53-bcf6-0300c1084046\" (UID: \"205d682b-7592-4b53-bcf6-0300c1084046\") " Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.602389 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-config\") pod \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\" (UID: \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\") " Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.602473 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/205d682b-7592-4b53-bcf6-0300c1084046-config\") pod \"205d682b-7592-4b53-bcf6-0300c1084046\" (UID: \"205d682b-7592-4b53-bcf6-0300c1084046\") " Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.602506 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-proxy-ca-bundles\") pod \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\" (UID: \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\") " Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.602545 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/205d682b-7592-4b53-bcf6-0300c1084046-client-ca\") pod \"205d682b-7592-4b53-bcf6-0300c1084046\" (UID: \"205d682b-7592-4b53-bcf6-0300c1084046\") " Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.602565 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-serving-cert\") pod \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\" (UID: \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\") " Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.602593 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pk9mk\" (UniqueName: \"kubernetes.io/projected/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-kube-api-access-pk9mk\") pod \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\" (UID: \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\") " Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.602623 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsgq8\" (UniqueName: \"kubernetes.io/projected/205d682b-7592-4b53-bcf6-0300c1084046-kube-api-access-dsgq8\") pod \"205d682b-7592-4b53-bcf6-0300c1084046\" (UID: \"205d682b-7592-4b53-bcf6-0300c1084046\") " Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.602649 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-client-ca\") pod \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\" (UID: \"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf\") " Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.603220 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod 
"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf" (UID: "ef388340-e800-4dbf-a7cf-6d6bd9a1eecf"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.603351 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-client-ca" (OuterVolumeSpecName: "client-ca") pod "ef388340-e800-4dbf-a7cf-6d6bd9a1eecf" (UID: "ef388340-e800-4dbf-a7cf-6d6bd9a1eecf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.603652 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/205d682b-7592-4b53-bcf6-0300c1084046-client-ca" (OuterVolumeSpecName: "client-ca") pod "205d682b-7592-4b53-bcf6-0300c1084046" (UID: "205d682b-7592-4b53-bcf6-0300c1084046"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.603704 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/205d682b-7592-4b53-bcf6-0300c1084046-config" (OuterVolumeSpecName: "config") pod "205d682b-7592-4b53-bcf6-0300c1084046" (UID: "205d682b-7592-4b53-bcf6-0300c1084046"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.604118 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-config" (OuterVolumeSpecName: "config") pod "ef388340-e800-4dbf-a7cf-6d6bd9a1eecf" (UID: "ef388340-e800-4dbf-a7cf-6d6bd9a1eecf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.608033 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ef388340-e800-4dbf-a7cf-6d6bd9a1eecf" (UID: "ef388340-e800-4dbf-a7cf-6d6bd9a1eecf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.608194 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/205d682b-7592-4b53-bcf6-0300c1084046-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "205d682b-7592-4b53-bcf6-0300c1084046" (UID: "205d682b-7592-4b53-bcf6-0300c1084046"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.608273 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/205d682b-7592-4b53-bcf6-0300c1084046-kube-api-access-dsgq8" (OuterVolumeSpecName: "kube-api-access-dsgq8") pod "205d682b-7592-4b53-bcf6-0300c1084046" (UID: "205d682b-7592-4b53-bcf6-0300c1084046"). InnerVolumeSpecName "kube-api-access-dsgq8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.608474 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-kube-api-access-pk9mk" (OuterVolumeSpecName: "kube-api-access-pk9mk") pod "ef388340-e800-4dbf-a7cf-6d6bd9a1eecf" (UID: "ef388340-e800-4dbf-a7cf-6d6bd9a1eecf"). InnerVolumeSpecName "kube-api-access-pk9mk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.705179 4718 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/205d682b-7592-4b53-bcf6-0300c1084046-client-ca\") on node \"crc\" DevicePath \"\"" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.705241 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.705257 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk9mk\" (UniqueName: \"kubernetes.io/projected/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-kube-api-access-pk9mk\") on node \"crc\" DevicePath \"\"" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.705271 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsgq8\" (UniqueName: \"kubernetes.io/projected/205d682b-7592-4b53-bcf6-0300c1084046-kube-api-access-dsgq8\") on node \"crc\" DevicePath \"\"" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.705285 4718 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-client-ca\") on node \"crc\" DevicePath \"\"" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.705298 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/205d682b-7592-4b53-bcf6-0300c1084046-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.705314 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-config\") on node \"crc\" DevicePath \"\"" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.705328 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/205d682b-7592-4b53-bcf6-0300c1084046-config\") on node \"crc\" DevicePath \"\"" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.705344 4718 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.921988 4718 generic.go:334] "Generic (PLEG): container finished" podID="205d682b-7592-4b53-bcf6-0300c1084046" containerID="423996cfbebbc46637a7d2dd67b1496c0db44c1d2f53c084d44bb3b29288ae37" exitCode=0 Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.922049 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.922076 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb" event={"ID":"205d682b-7592-4b53-bcf6-0300c1084046","Type":"ContainerDied","Data":"423996cfbebbc46637a7d2dd67b1496c0db44c1d2f53c084d44bb3b29288ae37"} Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.922109 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb" event={"ID":"205d682b-7592-4b53-bcf6-0300c1084046","Type":"ContainerDied","Data":"5e0ef202895d3a79d5fe35cf26313eb13a7bae3fa46e02f84471d3f6dcbe0450"} Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.922134 4718 scope.go:117] "RemoveContainer" containerID="423996cfbebbc46637a7d2dd67b1496c0db44c1d2f53c084d44bb3b29288ae37" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.924088 4718 generic.go:334] "Generic (PLEG): container finished" podID="ef388340-e800-4dbf-a7cf-6d6bd9a1eecf" containerID="22950400603ee3befe70223e6e0fcac3045c88a7bbef4e17e09cdc2ac1de22df" exitCode=0 Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.924120 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" event={"ID":"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf","Type":"ContainerDied","Data":"22950400603ee3befe70223e6e0fcac3045c88a7bbef4e17e09cdc2ac1de22df"} Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.924189 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" event={"ID":"ef388340-e800-4dbf-a7cf-6d6bd9a1eecf","Type":"ContainerDied","Data":"5a400f702d1b95e53719b526f10044a6491aaa4c062fe0f0ea599df7ecab6d22"} Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.924257 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xwp8r" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.944281 4718 scope.go:117] "RemoveContainer" containerID="423996cfbebbc46637a7d2dd67b1496c0db44c1d2f53c084d44bb3b29288ae37" Nov 23 14:57:40 crc kubenswrapper[4718]: E1123 14:57:40.946666 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"423996cfbebbc46637a7d2dd67b1496c0db44c1d2f53c084d44bb3b29288ae37\": container with ID starting with 423996cfbebbc46637a7d2dd67b1496c0db44c1d2f53c084d44bb3b29288ae37 not found: ID does not exist" containerID="423996cfbebbc46637a7d2dd67b1496c0db44c1d2f53c084d44bb3b29288ae37" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.946756 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"423996cfbebbc46637a7d2dd67b1496c0db44c1d2f53c084d44bb3b29288ae37"} err="failed to get container status \"423996cfbebbc46637a7d2dd67b1496c0db44c1d2f53c084d44bb3b29288ae37\": rpc error: code = NotFound desc = could not find container \"423996cfbebbc46637a7d2dd67b1496c0db44c1d2f53c084d44bb3b29288ae37\": container with ID starting with 423996cfbebbc46637a7d2dd67b1496c0db44c1d2f53c084d44bb3b29288ae37 not found: ID does not exist" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.946789 4718 scope.go:117] "RemoveContainer" containerID="22950400603ee3befe70223e6e0fcac3045c88a7bbef4e17e09cdc2ac1de22df" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.960399 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb"] Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.969763 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zmppb"] Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.974858 4718 scope.go:117] "RemoveContainer" containerID="22950400603ee3befe70223e6e0fcac3045c88a7bbef4e17e09cdc2ac1de22df" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.975022 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xwp8r"] Nov 23 14:57:40 crc kubenswrapper[4718]: E1123 14:57:40.977793 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22950400603ee3befe70223e6e0fcac3045c88a7bbef4e17e09cdc2ac1de22df\": container with ID starting with 22950400603ee3befe70223e6e0fcac3045c88a7bbef4e17e09cdc2ac1de22df not found: ID does not exist" containerID="22950400603ee3befe70223e6e0fcac3045c88a7bbef4e17e09cdc2ac1de22df" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.977846 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22950400603ee3befe70223e6e0fcac3045c88a7bbef4e17e09cdc2ac1de22df"} err="failed to get container status \"22950400603ee3befe70223e6e0fcac3045c88a7bbef4e17e09cdc2ac1de22df\": rpc error: code = NotFound desc = could not find container \"22950400603ee3befe70223e6e0fcac3045c88a7bbef4e17e09cdc2ac1de22df\": container with ID starting with 22950400603ee3befe70223e6e0fcac3045c88a7bbef4e17e09cdc2ac1de22df not found: ID does not exist" Nov 23 14:57:40 crc kubenswrapper[4718]: I1123 14:57:40.981615 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xwp8r"] Nov 23 14:57:41 crc 
kubenswrapper[4718]: I1123 14:57:41.357191 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-849dfc4c45-cgtwc"] Nov 23 14:57:41 crc kubenswrapper[4718]: E1123 14:57:41.357629 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef388340-e800-4dbf-a7cf-6d6bd9a1eecf" containerName="controller-manager" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.357640 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef388340-e800-4dbf-a7cf-6d6bd9a1eecf" containerName="controller-manager" Nov 23 14:57:41 crc kubenswrapper[4718]: E1123 14:57:41.357656 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205d682b-7592-4b53-bcf6-0300c1084046" containerName="route-controller-manager" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.357663 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="205d682b-7592-4b53-bcf6-0300c1084046" containerName="route-controller-manager" Nov 23 14:57:41 crc kubenswrapper[4718]: E1123 14:57:41.357682 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ae4200-81b3-4c2e-a0b7-9fb940b52aec" containerName="registry-server" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.357688 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="16ae4200-81b3-4c2e-a0b7-9fb940b52aec" containerName="registry-server" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.357793 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef388340-e800-4dbf-a7cf-6d6bd9a1eecf" containerName="controller-manager" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.357805 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="16ae4200-81b3-4c2e-a0b7-9fb940b52aec" containerName="registry-server" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.357814 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="205d682b-7592-4b53-bcf6-0300c1084046" containerName="route-controller-manager" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.358168 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-849dfc4c45-cgtwc" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.360473 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.360988 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.361145 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c568db8cb-npk6t"] Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.362263 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c568db8cb-npk6t" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.364390 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.364773 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.364989 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.365174 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.365313 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.365426 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.365480 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.365621 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.365647 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.368653 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.373221 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-849dfc4c45-cgtwc"] Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.376127 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.389221 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c568db8cb-npk6t"] Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.413497 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nbwv\" (UniqueName: \"kubernetes.io/projected/c273f4ab-8e77-441e-9a76-e550ca918d64-kube-api-access-5nbwv\") pod \"route-controller-manager-c568db8cb-npk6t\" (UID: \"c273f4ab-8e77-441e-9a76-e550ca918d64\") " pod="openshift-route-controller-manager/route-controller-manager-c568db8cb-npk6t" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.413562 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64f58f13-cf13-47e6-bce4-d31e8a25cea3-serving-cert\") pod \"controller-manager-849dfc4c45-cgtwc\" (UID: \"64f58f13-cf13-47e6-bce4-d31e8a25cea3\") " pod="openshift-controller-manager/controller-manager-849dfc4c45-cgtwc" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.413622 
4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b8hq\" (UniqueName: \"kubernetes.io/projected/64f58f13-cf13-47e6-bce4-d31e8a25cea3-kube-api-access-4b8hq\") pod \"controller-manager-849dfc4c45-cgtwc\" (UID: \"64f58f13-cf13-47e6-bce4-d31e8a25cea3\") " pod="openshift-controller-manager/controller-manager-849dfc4c45-cgtwc" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.413644 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c273f4ab-8e77-441e-9a76-e550ca918d64-config\") pod \"route-controller-manager-c568db8cb-npk6t\" (UID: \"c273f4ab-8e77-441e-9a76-e550ca918d64\") " pod="openshift-route-controller-manager/route-controller-manager-c568db8cb-npk6t" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.413687 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64f58f13-cf13-47e6-bce4-d31e8a25cea3-config\") pod \"controller-manager-849dfc4c45-cgtwc\" (UID: \"64f58f13-cf13-47e6-bce4-d31e8a25cea3\") " pod="openshift-controller-manager/controller-manager-849dfc4c45-cgtwc" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.413705 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c273f4ab-8e77-441e-9a76-e550ca918d64-client-ca\") pod \"route-controller-manager-c568db8cb-npk6t\" (UID: \"c273f4ab-8e77-441e-9a76-e550ca918d64\") " pod="openshift-route-controller-manager/route-controller-manager-c568db8cb-npk6t" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.413729 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64f58f13-cf13-47e6-bce4-d31e8a25cea3-client-ca\") pod \"controller-manager-849dfc4c45-cgtwc\" (UID: \"64f58f13-cf13-47e6-bce4-d31e8a25cea3\") " pod="openshift-controller-manager/controller-manager-849dfc4c45-cgtwc" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.413754 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/64f58f13-cf13-47e6-bce4-d31e8a25cea3-proxy-ca-bundles\") pod \"controller-manager-849dfc4c45-cgtwc\" (UID: \"64f58f13-cf13-47e6-bce4-d31e8a25cea3\") " pod="openshift-controller-manager/controller-manager-849dfc4c45-cgtwc" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.413781 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c273f4ab-8e77-441e-9a76-e550ca918d64-serving-cert\") pod \"route-controller-manager-c568db8cb-npk6t\" (UID: \"c273f4ab-8e77-441e-9a76-e550ca918d64\") " pod="openshift-route-controller-manager/route-controller-manager-c568db8cb-npk6t" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.514594 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b8hq\" (UniqueName: \"kubernetes.io/projected/64f58f13-cf13-47e6-bce4-d31e8a25cea3-kube-api-access-4b8hq\") pod \"controller-manager-849dfc4c45-cgtwc\" (UID: \"64f58f13-cf13-47e6-bce4-d31e8a25cea3\") " pod="openshift-controller-manager/controller-manager-849dfc4c45-cgtwc" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.514635 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c273f4ab-8e77-441e-9a76-e550ca918d64-config\") pod \"route-controller-manager-c568db8cb-npk6t\" (UID: \"c273f4ab-8e77-441e-9a76-e550ca918d64\") " pod="openshift-route-controller-manager/route-controller-manager-c568db8cb-npk6t" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.514676 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64f58f13-cf13-47e6-bce4-d31e8a25cea3-config\") pod \"controller-manager-849dfc4c45-cgtwc\" (UID: \"64f58f13-cf13-47e6-bce4-d31e8a25cea3\") " pod="openshift-controller-manager/controller-manager-849dfc4c45-cgtwc" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.514695 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c273f4ab-8e77-441e-9a76-e550ca918d64-client-ca\") pod \"route-controller-manager-c568db8cb-npk6t\" (UID: \"c273f4ab-8e77-441e-9a76-e550ca918d64\") " pod="openshift-route-controller-manager/route-controller-manager-c568db8cb-npk6t" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.514716 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64f58f13-cf13-47e6-bce4-d31e8a25cea3-client-ca\") pod \"controller-manager-849dfc4c45-cgtwc\" (UID: \"64f58f13-cf13-47e6-bce4-d31e8a25cea3\") " pod="openshift-controller-manager/controller-manager-849dfc4c45-cgtwc" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.514736 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/64f58f13-cf13-47e6-bce4-d31e8a25cea3-proxy-ca-bundles\") pod \"controller-manager-849dfc4c45-cgtwc\" (UID: \"64f58f13-cf13-47e6-bce4-d31e8a25cea3\") " pod="openshift-controller-manager/controller-manager-849dfc4c45-cgtwc" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.514770 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c273f4ab-8e77-441e-9a76-e550ca918d64-serving-cert\") pod \"route-controller-manager-c568db8cb-npk6t\" (UID: \"c273f4ab-8e77-441e-9a76-e550ca918d64\") " pod="openshift-route-controller-manager/route-controller-manager-c568db8cb-npk6t" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.514805 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nbwv\" (UniqueName: \"kubernetes.io/projected/c273f4ab-8e77-441e-9a76-e550ca918d64-kube-api-access-5nbwv\") pod \"route-controller-manager-c568db8cb-npk6t\" (UID: \"c273f4ab-8e77-441e-9a76-e550ca918d64\") " pod="openshift-route-controller-manager/route-controller-manager-c568db8cb-npk6t" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.514834 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64f58f13-cf13-47e6-bce4-d31e8a25cea3-serving-cert\") pod \"controller-manager-849dfc4c45-cgtwc\" (UID: \"64f58f13-cf13-47e6-bce4-d31e8a25cea3\") " pod="openshift-controller-manager/controller-manager-849dfc4c45-cgtwc" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.516125 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/64f58f13-cf13-47e6-bce4-d31e8a25cea3-proxy-ca-bundles\") pod \"controller-manager-849dfc4c45-cgtwc\" (UID: \"64f58f13-cf13-47e6-bce4-d31e8a25cea3\") " pod="openshift-controller-manager/controller-manager-849dfc4c45-cgtwc" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.516186 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64f58f13-cf13-47e6-bce4-d31e8a25cea3-client-ca\") pod \"controller-manager-849dfc4c45-cgtwc\" (UID: \"64f58f13-cf13-47e6-bce4-d31e8a25cea3\") " pod="openshift-controller-manager/controller-manager-849dfc4c45-cgtwc" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.516277 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64f58f13-cf13-47e6-bce4-d31e8a25cea3-config\") pod \"controller-manager-849dfc4c45-cgtwc\" (UID: \"64f58f13-cf13-47e6-bce4-d31e8a25cea3\") " pod="openshift-controller-manager/controller-manager-849dfc4c45-cgtwc" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.516322 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c273f4ab-8e77-441e-9a76-e550ca918d64-client-ca\") pod \"route-controller-manager-c568db8cb-npk6t\" (UID: \"c273f4ab-8e77-441e-9a76-e550ca918d64\") " pod="openshift-route-controller-manager/route-controller-manager-c568db8cb-npk6t" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.516366 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c273f4ab-8e77-441e-9a76-e550ca918d64-config\") pod \"route-controller-manager-c568db8cb-npk6t\" (UID: \"c273f4ab-8e77-441e-9a76-e550ca918d64\") " pod="openshift-route-controller-manager/route-controller-manager-c568db8cb-npk6t" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.525218 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c273f4ab-8e77-441e-9a76-e550ca918d64-serving-cert\") pod \"route-controller-manager-c568db8cb-npk6t\" (UID: \"c273f4ab-8e77-441e-9a76-e550ca918d64\") " pod="openshift-route-controller-manager/route-controller-manager-c568db8cb-npk6t" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.525280 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64f58f13-cf13-47e6-bce4-d31e8a25cea3-serving-cert\") pod \"controller-manager-849dfc4c45-cgtwc\" (UID: \"64f58f13-cf13-47e6-bce4-d31e8a25cea3\") " pod="openshift-controller-manager/controller-manager-849dfc4c45-cgtwc" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.531509 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nbwv\" (UniqueName: \"kubernetes.io/projected/c273f4ab-8e77-441e-9a76-e550ca918d64-kube-api-access-5nbwv\") pod \"route-controller-manager-c568db8cb-npk6t\" (UID: \"c273f4ab-8e77-441e-9a76-e550ca918d64\") " pod="openshift-route-controller-manager/route-controller-manager-c568db8cb-npk6t" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.532766 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b8hq\" (UniqueName: \"kubernetes.io/projected/64f58f13-cf13-47e6-bce4-d31e8a25cea3-kube-api-access-4b8hq\") pod \"controller-manager-849dfc4c45-cgtwc\" (UID: \"64f58f13-cf13-47e6-bce4-d31e8a25cea3\") " 
pod="openshift-controller-manager/controller-manager-849dfc4c45-cgtwc" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.673043 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-849dfc4c45-cgtwc" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.680996 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c568db8cb-npk6t" Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.883122 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c568db8cb-npk6t"] Nov 23 14:57:41 crc kubenswrapper[4718]: I1123 14:57:41.935117 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c568db8cb-npk6t" event={"ID":"c273f4ab-8e77-441e-9a76-e550ca918d64","Type":"ContainerStarted","Data":"5e9cd07baecd0db933e379aea41c2a0986c390ff29c233cefc87591199388f7d"} Nov 23 14:57:42 crc kubenswrapper[4718]: I1123 14:57:42.151616 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-849dfc4c45-cgtwc"] Nov 23 14:57:42 crc kubenswrapper[4718]: W1123 14:57:42.163182 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64f58f13_cf13_47e6_bce4_d31e8a25cea3.slice/crio-4894be15b0ca4d84bb78888df274afae5465f6a64466d758e40dc7c88e34b12c WatchSource:0}: Error finding container 4894be15b0ca4d84bb78888df274afae5465f6a64466d758e40dc7c88e34b12c: Status 404 returned error can't find the container with id 4894be15b0ca4d84bb78888df274afae5465f6a64466d758e40dc7c88e34b12c Nov 23 14:57:42 crc kubenswrapper[4718]: I1123 14:57:42.449034 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="205d682b-7592-4b53-bcf6-0300c1084046" path="/var/lib/kubelet/pods/205d682b-7592-4b53-bcf6-0300c1084046/volumes" Nov 23 14:57:42 crc kubenswrapper[4718]: I1123 14:57:42.450017 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef388340-e800-4dbf-a7cf-6d6bd9a1eecf" path="/var/lib/kubelet/pods/ef388340-e800-4dbf-a7cf-6d6bd9a1eecf/volumes" Nov 23 14:57:42 crc kubenswrapper[4718]: I1123 14:57:42.941453 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c568db8cb-npk6t" event={"ID":"c273f4ab-8e77-441e-9a76-e550ca918d64","Type":"ContainerStarted","Data":"7ba37c83d844322c89584031770a4b257d0f12c820e102bb4b98af3b19fdbf19"} Nov 23 14:57:42 crc kubenswrapper[4718]: I1123 14:57:42.941657 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c568db8cb-npk6t" Nov 23 14:57:42 crc kubenswrapper[4718]: I1123 14:57:42.943821 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-849dfc4c45-cgtwc" event={"ID":"64f58f13-cf13-47e6-bce4-d31e8a25cea3","Type":"ContainerStarted","Data":"363cd1a1b045621b87b08cf2fbcc5bfb2a339eb20d8339b87ca6115995a22e3b"} Nov 23 14:57:42 crc kubenswrapper[4718]: I1123 14:57:42.943867 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-849dfc4c45-cgtwc" event={"ID":"64f58f13-cf13-47e6-bce4-d31e8a25cea3","Type":"ContainerStarted","Data":"4894be15b0ca4d84bb78888df274afae5465f6a64466d758e40dc7c88e34b12c"} Nov 23 14:57:42 crc 
kubenswrapper[4718]: I1123 14:57:42.944510 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-849dfc4c45-cgtwc" Nov 23 14:57:42 crc kubenswrapper[4718]: I1123 14:57:42.948788 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-849dfc4c45-cgtwc" Nov 23 14:57:42 crc kubenswrapper[4718]: I1123 14:57:42.950479 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-c568db8cb-npk6t" Nov 23 14:57:42 crc kubenswrapper[4718]: I1123 14:57:42.977609 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-849dfc4c45-cgtwc" podStartSLOduration=2.9775943639999998 podStartE2EDuration="2.977594364s" podCreationTimestamp="2025-11-23 14:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:57:42.976623857 +0000 UTC m=+714.216243701" watchObservedRunningTime="2025-11-23 14:57:42.977594364 +0000 UTC m=+714.217214208" Nov 23 14:57:42 crc kubenswrapper[4718]: I1123 14:57:42.979781 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-c568db8cb-npk6t" podStartSLOduration=2.979773152 podStartE2EDuration="2.979773152s" podCreationTimestamp="2025-11-23 14:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:57:42.958581468 +0000 UTC m=+714.198201312" watchObservedRunningTime="2025-11-23 14:57:42.979773152 +0000 UTC m=+714.219392986" Nov 23 14:57:45 crc kubenswrapper[4718]: I1123 14:57:45.792722 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-j47c5" Nov 23 14:57:45 crc kubenswrapper[4718]: I1123 14:57:45.793659 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-j47c5" Nov 23 14:57:45 crc kubenswrapper[4718]: I1123 14:57:45.825323 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-j47c5" Nov 23 14:57:45 crc kubenswrapper[4718]: I1123 14:57:45.987031 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-j47c5" Nov 23 14:57:46 crc kubenswrapper[4718]: I1123 14:57:46.567725 4718 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 23 14:57:46 crc kubenswrapper[4718]: I1123 14:57:46.891168 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn"] Nov 23 14:57:46 crc kubenswrapper[4718]: I1123 14:57:46.893075 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn" Nov 23 14:57:46 crc kubenswrapper[4718]: I1123 14:57:46.899923 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-642vh" Nov 23 14:57:46 crc kubenswrapper[4718]: I1123 14:57:46.903816 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn"] Nov 23 14:57:46 crc kubenswrapper[4718]: I1123 14:57:46.985480 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8qwl\" (UniqueName: \"kubernetes.io/projected/50ab9973-a819-4833-aaab-955e5d2eb560-kube-api-access-l8qwl\") pod \"e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn\" (UID: \"50ab9973-a819-4833-aaab-955e5d2eb560\") " pod="openstack-operators/e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn" Nov 23 14:57:46 crc kubenswrapper[4718]: I1123 14:57:46.985579 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50ab9973-a819-4833-aaab-955e5d2eb560-bundle\") pod \"e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn\" (UID: \"50ab9973-a819-4833-aaab-955e5d2eb560\") " pod="openstack-operators/e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn" Nov 23 14:57:46 crc kubenswrapper[4718]: I1123 14:57:46.985640 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50ab9973-a819-4833-aaab-955e5d2eb560-util\") pod \"e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn\" (UID: \"50ab9973-a819-4833-aaab-955e5d2eb560\") " pod="openstack-operators/e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn" Nov 23 14:57:47 crc kubenswrapper[4718]: I1123 14:57:47.086513 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50ab9973-a819-4833-aaab-955e5d2eb560-bundle\") pod \"e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn\" (UID: \"50ab9973-a819-4833-aaab-955e5d2eb560\") " pod="openstack-operators/e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn" Nov 23 14:57:47 crc kubenswrapper[4718]: I1123 14:57:47.086585 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50ab9973-a819-4833-aaab-955e5d2eb560-util\") pod \"e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn\" (UID: \"50ab9973-a819-4833-aaab-955e5d2eb560\") " pod="openstack-operators/e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn" Nov 23 14:57:47 crc kubenswrapper[4718]: I1123 14:57:47.086686 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8qwl\" (UniqueName: \"kubernetes.io/projected/50ab9973-a819-4833-aaab-955e5d2eb560-kube-api-access-l8qwl\") pod \"e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn\" (UID: \"50ab9973-a819-4833-aaab-955e5d2eb560\") " pod="openstack-operators/e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn" Nov 23 14:57:47 crc kubenswrapper[4718]: I1123 14:57:47.087040 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/50ab9973-a819-4833-aaab-955e5d2eb560-bundle\") pod \"e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn\" (UID: \"50ab9973-a819-4833-aaab-955e5d2eb560\") " pod="openstack-operators/e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn" Nov 23 14:57:47 crc kubenswrapper[4718]: I1123 14:57:47.087135 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50ab9973-a819-4833-aaab-955e5d2eb560-util\") pod \"e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn\" (UID: \"50ab9973-a819-4833-aaab-955e5d2eb560\") " pod="openstack-operators/e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn" Nov 23 14:57:47 crc kubenswrapper[4718]: I1123 14:57:47.105007 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8qwl\" (UniqueName: \"kubernetes.io/projected/50ab9973-a819-4833-aaab-955e5d2eb560-kube-api-access-l8qwl\") pod \"e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn\" (UID: \"50ab9973-a819-4833-aaab-955e5d2eb560\") " pod="openstack-operators/e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn" Nov 23 14:57:47 crc kubenswrapper[4718]: I1123 14:57:47.213699 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn" Nov 23 14:57:47 crc kubenswrapper[4718]: I1123 14:57:47.640819 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn"] Nov 23 14:57:47 crc kubenswrapper[4718]: W1123 14:57:47.644344 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50ab9973_a819_4833_aaab_955e5d2eb560.slice/crio-2d6081853b66e711b17b5b73ef03a6f66a614ebbacd76a8fdbafa3da1078a6ac WatchSource:0}: Error finding container 2d6081853b66e711b17b5b73ef03a6f66a614ebbacd76a8fdbafa3da1078a6ac: Status 404 returned error can't find the container with id 2d6081853b66e711b17b5b73ef03a6f66a614ebbacd76a8fdbafa3da1078a6ac Nov 23 14:57:47 crc kubenswrapper[4718]: I1123 14:57:47.974272 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn" event={"ID":"50ab9973-a819-4833-aaab-955e5d2eb560","Type":"ContainerStarted","Data":"2d6081853b66e711b17b5b73ef03a6f66a614ebbacd76a8fdbafa3da1078a6ac"} Nov 23 14:57:48 crc kubenswrapper[4718]: I1123 14:57:48.982695 4718 generic.go:334] "Generic (PLEG): container finished" podID="50ab9973-a819-4833-aaab-955e5d2eb560" containerID="87aff7ace65d0388bcbc2ec550a5eaca390bb4b64abecbac5915675276f66c8f" exitCode=0 Nov 23 14:57:48 crc kubenswrapper[4718]: I1123 14:57:48.982772 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn" event={"ID":"50ab9973-a819-4833-aaab-955e5d2eb560","Type":"ContainerDied","Data":"87aff7ace65d0388bcbc2ec550a5eaca390bb4b64abecbac5915675276f66c8f"} Nov 23 14:57:53 crc kubenswrapper[4718]: I1123 14:57:53.008876 4718 generic.go:334] "Generic (PLEG): container finished" podID="50ab9973-a819-4833-aaab-955e5d2eb560" containerID="4f974eaeb51aaf7e97c300475d72ed652358fa3c7c59dd2e87a6492d42edb916" exitCode=0 Nov 23 14:57:53 crc kubenswrapper[4718]: I1123 14:57:53.008917 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn" event={"ID":"50ab9973-a819-4833-aaab-955e5d2eb560","Type":"ContainerDied","Data":"4f974eaeb51aaf7e97c300475d72ed652358fa3c7c59dd2e87a6492d42edb916"} Nov 23 14:57:53 crc kubenswrapper[4718]: I1123 14:57:53.053108 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 14:57:53 crc kubenswrapper[4718]: I1123 14:57:53.053168 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 14:57:54 crc kubenswrapper[4718]: I1123 14:57:54.023713 4718 generic.go:334] "Generic (PLEG): container finished" podID="50ab9973-a819-4833-aaab-955e5d2eb560" containerID="3d446a688592c4b8b318e3ca70d818983b95b2eddcf67a9b118ed2fdcf6349c7" exitCode=0 Nov 23 14:57:54 crc kubenswrapper[4718]: I1123 14:57:54.023762 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn" event={"ID":"50ab9973-a819-4833-aaab-955e5d2eb560","Type":"ContainerDied","Data":"3d446a688592c4b8b318e3ca70d818983b95b2eddcf67a9b118ed2fdcf6349c7"} Nov 23 14:57:55 crc kubenswrapper[4718]: I1123 14:57:55.360523 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn" Nov 23 14:57:55 crc kubenswrapper[4718]: I1123 14:57:55.512855 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8qwl\" (UniqueName: \"kubernetes.io/projected/50ab9973-a819-4833-aaab-955e5d2eb560-kube-api-access-l8qwl\") pod \"50ab9973-a819-4833-aaab-955e5d2eb560\" (UID: \"50ab9973-a819-4833-aaab-955e5d2eb560\") " Nov 23 14:57:55 crc kubenswrapper[4718]: I1123 14:57:55.513248 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50ab9973-a819-4833-aaab-955e5d2eb560-util\") pod \"50ab9973-a819-4833-aaab-955e5d2eb560\" (UID: \"50ab9973-a819-4833-aaab-955e5d2eb560\") " Nov 23 14:57:55 crc kubenswrapper[4718]: I1123 14:57:55.513341 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50ab9973-a819-4833-aaab-955e5d2eb560-bundle\") pod \"50ab9973-a819-4833-aaab-955e5d2eb560\" (UID: \"50ab9973-a819-4833-aaab-955e5d2eb560\") " Nov 23 14:57:55 crc kubenswrapper[4718]: I1123 14:57:55.514067 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50ab9973-a819-4833-aaab-955e5d2eb560-bundle" (OuterVolumeSpecName: "bundle") pod "50ab9973-a819-4833-aaab-955e5d2eb560" (UID: "50ab9973-a819-4833-aaab-955e5d2eb560"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:57:55 crc kubenswrapper[4718]: I1123 14:57:55.520350 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ab9973-a819-4833-aaab-955e5d2eb560-kube-api-access-l8qwl" (OuterVolumeSpecName: "kube-api-access-l8qwl") pod "50ab9973-a819-4833-aaab-955e5d2eb560" (UID: "50ab9973-a819-4833-aaab-955e5d2eb560"). InnerVolumeSpecName "kube-api-access-l8qwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:57:55 crc kubenswrapper[4718]: I1123 14:57:55.523587 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50ab9973-a819-4833-aaab-955e5d2eb560-util" (OuterVolumeSpecName: "util") pod "50ab9973-a819-4833-aaab-955e5d2eb560" (UID: "50ab9973-a819-4833-aaab-955e5d2eb560"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:57:55 crc kubenswrapper[4718]: I1123 14:57:55.614768 4718 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50ab9973-a819-4833-aaab-955e5d2eb560-util\") on node \"crc\" DevicePath \"\"" Nov 23 14:57:55 crc kubenswrapper[4718]: I1123 14:57:55.614811 4718 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50ab9973-a819-4833-aaab-955e5d2eb560-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 14:57:55 crc kubenswrapper[4718]: I1123 14:57:55.614825 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8qwl\" (UniqueName: \"kubernetes.io/projected/50ab9973-a819-4833-aaab-955e5d2eb560-kube-api-access-l8qwl\") on node \"crc\" DevicePath \"\"" Nov 23 14:57:56 crc kubenswrapper[4718]: I1123 14:57:56.040137 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn" event={"ID":"50ab9973-a819-4833-aaab-955e5d2eb560","Type":"ContainerDied","Data":"2d6081853b66e711b17b5b73ef03a6f66a614ebbacd76a8fdbafa3da1078a6ac"} Nov 23 14:57:56 crc kubenswrapper[4718]: I1123 14:57:56.040179 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d6081853b66e711b17b5b73ef03a6f66a614ebbacd76a8fdbafa3da1078a6ac" Nov 23 14:57:56 crc kubenswrapper[4718]: I1123 14:57:56.040202 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn" Nov 23 14:57:59 crc kubenswrapper[4718]: I1123 14:57:59.363688 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-597d69585c-9tk5p"] Nov 23 14:57:59 crc kubenswrapper[4718]: E1123 14:57:59.365090 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ab9973-a819-4833-aaab-955e5d2eb560" containerName="util" Nov 23 14:57:59 crc kubenswrapper[4718]: I1123 14:57:59.365162 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ab9973-a819-4833-aaab-955e5d2eb560" containerName="util" Nov 23 14:57:59 crc kubenswrapper[4718]: E1123 14:57:59.365228 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ab9973-a819-4833-aaab-955e5d2eb560" containerName="pull" Nov 23 14:57:59 crc kubenswrapper[4718]: I1123 14:57:59.365282 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ab9973-a819-4833-aaab-955e5d2eb560" containerName="pull" Nov 23 14:57:59 crc kubenswrapper[4718]: E1123 14:57:59.365355 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ab9973-a819-4833-aaab-955e5d2eb560" containerName="extract" Nov 23 14:57:59 crc kubenswrapper[4718]: I1123 14:57:59.365422 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ab9973-a819-4833-aaab-955e5d2eb560" containerName="extract" Nov 23 14:57:59 crc kubenswrapper[4718]: I1123 14:57:59.365629 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="50ab9973-a819-4833-aaab-955e5d2eb560" containerName="extract" Nov 23 14:57:59 crc kubenswrapper[4718]: I1123 14:57:59.366340 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-597d69585c-9tk5p" Nov 23 14:57:59 crc kubenswrapper[4718]: I1123 14:57:59.367204 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk2q5\" (UniqueName: \"kubernetes.io/projected/5b1ab78b-6600-4c1f-a302-f3b0369892c2-kube-api-access-rk2q5\") pod \"openstack-operator-controller-operator-597d69585c-9tk5p\" (UID: \"5b1ab78b-6600-4c1f-a302-f3b0369892c2\") " pod="openstack-operators/openstack-operator-controller-operator-597d69585c-9tk5p" Nov 23 14:57:59 crc kubenswrapper[4718]: I1123 14:57:59.368996 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-n7jvw" Nov 23 14:57:59 crc kubenswrapper[4718]: I1123 14:57:59.389705 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-597d69585c-9tk5p"] Nov 23 14:57:59 crc kubenswrapper[4718]: I1123 14:57:59.468376 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk2q5\" (UniqueName: \"kubernetes.io/projected/5b1ab78b-6600-4c1f-a302-f3b0369892c2-kube-api-access-rk2q5\") pod \"openstack-operator-controller-operator-597d69585c-9tk5p\" (UID: \"5b1ab78b-6600-4c1f-a302-f3b0369892c2\") " pod="openstack-operators/openstack-operator-controller-operator-597d69585c-9tk5p" Nov 23 14:57:59 crc kubenswrapper[4718]: I1123 14:57:59.486012 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk2q5\" (UniqueName: \"kubernetes.io/projected/5b1ab78b-6600-4c1f-a302-f3b0369892c2-kube-api-access-rk2q5\") pod \"openstack-operator-controller-operator-597d69585c-9tk5p\" 
(UID: \"5b1ab78b-6600-4c1f-a302-f3b0369892c2\") " pod="openstack-operators/openstack-operator-controller-operator-597d69585c-9tk5p" Nov 23 14:57:59 crc kubenswrapper[4718]: I1123 14:57:59.683712 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-597d69585c-9tk5p" Nov 23 14:58:00 crc kubenswrapper[4718]: I1123 14:58:00.160112 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-597d69585c-9tk5p"] Nov 23 14:58:01 crc kubenswrapper[4718]: I1123 14:58:01.070169 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-597d69585c-9tk5p" event={"ID":"5b1ab78b-6600-4c1f-a302-f3b0369892c2","Type":"ContainerStarted","Data":"c161eef80dd51df69f4cabab265f7155f889d9c0cf86d90c54a138b0f6d2ba19"} Nov 23 14:58:06 crc kubenswrapper[4718]: I1123 14:58:06.106068 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-597d69585c-9tk5p" event={"ID":"5b1ab78b-6600-4c1f-a302-f3b0369892c2","Type":"ContainerStarted","Data":"e29039bc530ad59c9f6b745646bcfa4d6bed7e8f734c997937f257ca4339a685"} Nov 23 14:58:08 crc kubenswrapper[4718]: I1123 14:58:08.119026 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-597d69585c-9tk5p" event={"ID":"5b1ab78b-6600-4c1f-a302-f3b0369892c2","Type":"ContainerStarted","Data":"283aae2a51b702d6d96b2bd00f96207976c0d506816518b953851c3aaab52230"} Nov 23 14:58:08 crc kubenswrapper[4718]: I1123 14:58:08.119349 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-597d69585c-9tk5p" Nov 23 14:58:08 crc kubenswrapper[4718]: I1123 14:58:08.157827 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-597d69585c-9tk5p" podStartSLOduration=2.055748386 podStartE2EDuration="9.157809119s" podCreationTimestamp="2025-11-23 14:57:59 +0000 UTC" firstStartedPulling="2025-11-23 14:58:00.174807622 +0000 UTC m=+731.414427456" lastFinishedPulling="2025-11-23 14:58:07.276868345 +0000 UTC m=+738.516488189" observedRunningTime="2025-11-23 14:58:08.153602255 +0000 UTC m=+739.393222099" watchObservedRunningTime="2025-11-23 14:58:08.157809119 +0000 UTC m=+739.397428963" Nov 23 14:58:19 crc kubenswrapper[4718]: I1123 14:58:19.686773 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-597d69585c-9tk5p" Nov 23 14:58:23 crc kubenswrapper[4718]: I1123 14:58:23.053073 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 14:58:23 crc kubenswrapper[4718]: I1123 14:58:23.053667 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 14:58:23 crc kubenswrapper[4718]: I1123 14:58:23.053715 4718 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" Nov 23 14:58:23 crc kubenswrapper[4718]: I1123 14:58:23.054272 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e672c46344261e8af36921b484538b589267121731870b982143bf0d2e76b2f1"} pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 14:58:23 crc kubenswrapper[4718]: I1123 14:58:23.054326 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" containerID="cri-o://e672c46344261e8af36921b484538b589267121731870b982143bf0d2e76b2f1" gracePeriod=600 Nov 23 14:58:23 crc kubenswrapper[4718]: I1123 14:58:23.231091 4718 generic.go:334] "Generic (PLEG): container finished" podID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerID="e672c46344261e8af36921b484538b589267121731870b982143bf0d2e76b2f1" exitCode=0 Nov 23 14:58:23 crc kubenswrapper[4718]: I1123 14:58:23.231155 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerDied","Data":"e672c46344261e8af36921b484538b589267121731870b982143bf0d2e76b2f1"} Nov 23 14:58:23 crc kubenswrapper[4718]: I1123 14:58:23.231201 4718 scope.go:117] "RemoveContainer" containerID="e69f70c6d8e0cb888bc50f00b9d126d21fc061450a3827ee8309102066c2eb2c" Nov 23 14:58:24 crc kubenswrapper[4718]: I1123 14:58:24.240857 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerStarted","Data":"99f9fac909b59d4f75ad1a92599badf9d1f5ce639b6a71381d289bfc1cc670ef"} Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.331759 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75fb479bcc-kwnhv"] Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.333479 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-kwnhv" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.336410 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-v7b7r" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.338977 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6498cbf48f-k82wv"] Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.340113 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-k82wv" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.347091 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-r2pj2" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.352291 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75fb479bcc-kwnhv"] Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.366648 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6498cbf48f-k82wv"] Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.376582 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-767ccfd65f-v5dn9"] Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.378059 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-v5dn9" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.385027 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-9snz6" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.385127 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cgdf\" (UniqueName: \"kubernetes.io/projected/e402a0ac-7f35-4bab-9948-b664c0ef9636-kube-api-access-6cgdf\") pod \"barbican-operator-controller-manager-75fb479bcc-kwnhv\" (UID: \"e402a0ac-7f35-4bab-9948-b664c0ef9636\") " pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-kwnhv" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.385224 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxq2x\" (UniqueName: \"kubernetes.io/projected/b5aad852-89aa-459e-8771-50ef010620ef-kube-api-access-hxq2x\") pod \"cinder-operator-controller-manager-6498cbf48f-k82wv\" (UID: \"b5aad852-89aa-459e-8771-50ef010620ef\") " pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-k82wv" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.409866 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-56f54d6746-k88hn"] Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.410910 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-k88hn" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.413512 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-svg92" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.415574 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-767ccfd65f-v5dn9"] Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.426277 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7969689c84-h4hzc"] Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.427475 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7969689c84-h4hzc" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.432767 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-hsf2j" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.435936 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-56f54d6746-k88hn"] Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.459846 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7969689c84-h4hzc"] Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.459879 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-598f69df5d-8trcx"] Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.460672 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-598f69df5d-8trcx"] Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.460745 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-8trcx" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.462045 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-pcgst" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.474941 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6dd8864d7c-h8b4l"] Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.475856 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-h8b4l" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.477640 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.479979 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-j5z9z" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.485887 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxq2x\" (UniqueName: \"kubernetes.io/projected/b5aad852-89aa-459e-8771-50ef010620ef-kube-api-access-hxq2x\") pod \"cinder-operator-controller-manager-6498cbf48f-k82wv\" (UID: \"b5aad852-89aa-459e-8771-50ef010620ef\") " pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-k82wv" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.485927 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cgdf\" (UniqueName: \"kubernetes.io/projected/e402a0ac-7f35-4bab-9948-b664c0ef9636-kube-api-access-6cgdf\") pod \"barbican-operator-controller-manager-75fb479bcc-kwnhv\" (UID: \"e402a0ac-7f35-4bab-9948-b664c0ef9636\") " pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-kwnhv" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.485985 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxjm4\" (UniqueName: \"kubernetes.io/projected/1b5f1764-1a63-4fda-988c-49a8bc17fe79-kube-api-access-xxjm4\") pod \"heat-operator-controller-manager-56f54d6746-k88hn\" (UID: \"1b5f1764-1a63-4fda-988c-49a8bc17fe79\") " pod="openstack-operators/heat-operator-controller-manager-56f54d6746-k88hn" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.486020 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ct59\" (UniqueName: \"kubernetes.io/projected/960d1cfd-fc93-466c-8590-723c68c0bc05-kube-api-access-2ct59\") pod \"glance-operator-controller-manager-7969689c84-h4hzc\" (UID: \"960d1cfd-fc93-466c-8590-723c68c0bc05\") " pod="openstack-operators/glance-operator-controller-manager-7969689c84-h4hzc" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.486045 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k5rw\" (UniqueName: \"kubernetes.io/projected/ce59b10a-2110-44a2-9489-b1e06f6a1032-kube-api-access-8k5rw\") pod \"designate-operator-controller-manager-767ccfd65f-v5dn9\" (UID: \"ce59b10a-2110-44a2-9489-b1e06f6a1032\") " pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-v5dn9" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.486082 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr24j\" (UniqueName: \"kubernetes.io/projected/f4618913-9a14-4f47-89ec-9c4b0a931434-kube-api-access-nr24j\") pod \"horizon-operator-controller-manager-598f69df5d-8trcx\" (UID: \"f4618913-9a14-4f47-89ec-9c4b0a931434\") " pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-8trcx" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.493694 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-99b499f4-rw4vq"] Nov 23 14:58:36 crc 
Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.505182 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-rw4vq"
Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.510863 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-99b499f4-rw4vq"]
Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.510991 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-dbc89"
Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.528074 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cgdf\" (UniqueName: \"kubernetes.io/projected/e402a0ac-7f35-4bab-9948-b664c0ef9636-kube-api-access-6cgdf\") pod \"barbican-operator-controller-manager-75fb479bcc-kwnhv\" (UID: \"e402a0ac-7f35-4bab-9948-b664c0ef9636\") " pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-kwnhv"
Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.540307 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxq2x\" (UniqueName: \"kubernetes.io/projected/b5aad852-89aa-459e-8771-50ef010620ef-kube-api-access-hxq2x\") pod \"cinder-operator-controller-manager-6498cbf48f-k82wv\" (UID: \"b5aad852-89aa-459e-8771-50ef010620ef\") " pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-k82wv"
Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.558034 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7454b96578-4g92d"]
Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.562840 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-4g92d" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.568148 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-96m69" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.588487 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr24j\" (UniqueName: \"kubernetes.io/projected/f4618913-9a14-4f47-89ec-9c4b0a931434-kube-api-access-nr24j\") pod \"horizon-operator-controller-manager-598f69df5d-8trcx\" (UID: \"f4618913-9a14-4f47-89ec-9c4b0a931434\") " pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-8trcx" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.588543 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4dg2\" (UniqueName: \"kubernetes.io/projected/b72e1603-d77f-4edc-87a2-3cc5469620fe-kube-api-access-l4dg2\") pod \"infra-operator-controller-manager-6dd8864d7c-h8b4l\" (UID: \"b72e1603-d77f-4edc-87a2-3cc5469620fe\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-h8b4l" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.588572 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9lbb\" (UniqueName: \"kubernetes.io/projected/7cc68ab9-c26a-437b-adcd-977eb063fe25-kube-api-access-w9lbb\") pod \"keystone-operator-controller-manager-7454b96578-4g92d\" (UID: \"7cc68ab9-c26a-437b-adcd-977eb063fe25\") " pod="openstack-operators/keystone-operator-controller-manager-7454b96578-4g92d" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.588597 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5sff\" (UniqueName: \"kubernetes.io/projected/06302b9c-68a3-4b48-88d7-cc0885ca0156-kube-api-access-v5sff\") pod \"ironic-operator-controller-manager-99b499f4-rw4vq\" (UID: \"06302b9c-68a3-4b48-88d7-cc0885ca0156\") " pod="openstack-operators/ironic-operator-controller-manager-99b499f4-rw4vq" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.588627 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxjm4\" (UniqueName: \"kubernetes.io/projected/1b5f1764-1a63-4fda-988c-49a8bc17fe79-kube-api-access-xxjm4\") pod \"heat-operator-controller-manager-56f54d6746-k88hn\" (UID: \"1b5f1764-1a63-4fda-988c-49a8bc17fe79\") " pod="openstack-operators/heat-operator-controller-manager-56f54d6746-k88hn" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.588653 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b72e1603-d77f-4edc-87a2-3cc5469620fe-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-h8b4l\" (UID: \"b72e1603-d77f-4edc-87a2-3cc5469620fe\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-h8b4l" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.588674 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ct59\" (UniqueName: \"kubernetes.io/projected/960d1cfd-fc93-466c-8590-723c68c0bc05-kube-api-access-2ct59\") pod \"glance-operator-controller-manager-7969689c84-h4hzc\" (UID: \"960d1cfd-fc93-466c-8590-723c68c0bc05\") " 
pod="openstack-operators/glance-operator-controller-manager-7969689c84-h4hzc" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.588700 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k5rw\" (UniqueName: \"kubernetes.io/projected/ce59b10a-2110-44a2-9489-b1e06f6a1032-kube-api-access-8k5rw\") pod \"designate-operator-controller-manager-767ccfd65f-v5dn9\" (UID: \"ce59b10a-2110-44a2-9489-b1e06f6a1032\") " pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-v5dn9" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.588818 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6dd8864d7c-h8b4l"] Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.611431 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7454b96578-4g92d"] Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.650767 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-kwnhv" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.651276 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58f887965d-47csv"] Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.653007 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58f887965d-47csv" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.653016 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxjm4\" (UniqueName: \"kubernetes.io/projected/1b5f1764-1a63-4fda-988c-49a8bc17fe79-kube-api-access-xxjm4\") pod \"heat-operator-controller-manager-56f54d6746-k88hn\" (UID: \"1b5f1764-1a63-4fda-988c-49a8bc17fe79\") " pod="openstack-operators/heat-operator-controller-manager-56f54d6746-k88hn" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.653314 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k5rw\" (UniqueName: \"kubernetes.io/projected/ce59b10a-2110-44a2-9489-b1e06f6a1032-kube-api-access-8k5rw\") pod \"designate-operator-controller-manager-767ccfd65f-v5dn9\" (UID: \"ce59b10a-2110-44a2-9489-b1e06f6a1032\") " pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-v5dn9" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.653596 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ct59\" (UniqueName: \"kubernetes.io/projected/960d1cfd-fc93-466c-8590-723c68c0bc05-kube-api-access-2ct59\") pod \"glance-operator-controller-manager-7969689c84-h4hzc\" (UID: \"960d1cfd-fc93-466c-8590-723c68c0bc05\") " pod="openstack-operators/glance-operator-controller-manager-7969689c84-h4hzc" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.662835 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-q9vl2" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.664810 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-k82wv" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.672060 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr24j\" (UniqueName: \"kubernetes.io/projected/f4618913-9a14-4f47-89ec-9c4b0a931434-kube-api-access-nr24j\") pod \"horizon-operator-controller-manager-598f69df5d-8trcx\" (UID: \"f4618913-9a14-4f47-89ec-9c4b0a931434\") " pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-8trcx" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.677138 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58f887965d-47csv"] Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.697502 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9lbb\" (UniqueName: \"kubernetes.io/projected/7cc68ab9-c26a-437b-adcd-977eb063fe25-kube-api-access-w9lbb\") pod \"keystone-operator-controller-manager-7454b96578-4g92d\" (UID: \"7cc68ab9-c26a-437b-adcd-977eb063fe25\") " pod="openstack-operators/keystone-operator-controller-manager-7454b96578-4g92d" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.697565 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5sff\" (UniqueName: \"kubernetes.io/projected/06302b9c-68a3-4b48-88d7-cc0885ca0156-kube-api-access-v5sff\") pod \"ironic-operator-controller-manager-99b499f4-rw4vq\" (UID: \"06302b9c-68a3-4b48-88d7-cc0885ca0156\") " pod="openstack-operators/ironic-operator-controller-manager-99b499f4-rw4vq" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.697617 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b72e1603-d77f-4edc-87a2-3cc5469620fe-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-h8b4l\" (UID: \"b72e1603-d77f-4edc-87a2-3cc5469620fe\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-h8b4l" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.697671 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4dg2\" (UniqueName: \"kubernetes.io/projected/b72e1603-d77f-4edc-87a2-3cc5469620fe-kube-api-access-l4dg2\") pod \"infra-operator-controller-manager-6dd8864d7c-h8b4l\" (UID: \"b72e1603-d77f-4edc-87a2-3cc5469620fe\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-h8b4l" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.697695 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wm9m\" (UniqueName: \"kubernetes.io/projected/020fa89c-9d76-439c-aee1-0843636d4469-kube-api-access-2wm9m\") pod \"manila-operator-controller-manager-58f887965d-47csv\" (UID: \"020fa89c-9d76-439c-aee1-0843636d4469\") " pod="openstack-operators/manila-operator-controller-manager-58f887965d-47csv" Nov 23 14:58:36 crc kubenswrapper[4718]: E1123 14:58:36.698149 4718 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 23 14:58:36 crc kubenswrapper[4718]: E1123 14:58:36.698193 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b72e1603-d77f-4edc-87a2-3cc5469620fe-cert podName:b72e1603-d77f-4edc-87a2-3cc5469620fe nodeName:}" failed. 
Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.715202 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-v5dn9"
Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.728210 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78bd47f458-g72r5"]
Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.729561 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-g72r5"
Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.731754 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-k88hn"
Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.743286 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5sff\" (UniqueName: \"kubernetes.io/projected/06302b9c-68a3-4b48-88d7-cc0885ca0156-kube-api-access-v5sff\") pod \"ironic-operator-controller-manager-99b499f4-rw4vq\" (UID: \"06302b9c-68a3-4b48-88d7-cc0885ca0156\") " pod="openstack-operators/ironic-operator-controller-manager-99b499f4-rw4vq"
Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.743474 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-8scvt"
Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.743992 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9lbb\" (UniqueName: \"kubernetes.io/projected/7cc68ab9-c26a-437b-adcd-977eb063fe25-kube-api-access-w9lbb\") pod \"keystone-operator-controller-manager-7454b96578-4g92d\" (UID: \"7cc68ab9-c26a-437b-adcd-977eb063fe25\") " pod="openstack-operators/keystone-operator-controller-manager-7454b96578-4g92d"
Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.755245 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-54b5986bb8-6785w"]
Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.756353 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4dg2\" (UniqueName: \"kubernetes.io/projected/b72e1603-d77f-4edc-87a2-3cc5469620fe-kube-api-access-l4dg2\") pod \"infra-operator-controller-manager-6dd8864d7c-h8b4l\" (UID: \"b72e1603-d77f-4edc-87a2-3cc5469620fe\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-h8b4l"
Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.757091 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-6785w"
Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.760080 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7969689c84-h4hzc" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.770519 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-54b5986bb8-6785w"] Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.772041 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-n6q89" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.779191 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-8trcx" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.798411 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbs4h\" (UniqueName: \"kubernetes.io/projected/0d78d642-939c-47e3-8d60-665dff178d44-kube-api-access-hbs4h\") pod \"mariadb-operator-controller-manager-54b5986bb8-6785w\" (UID: \"0d78d642-939c-47e3-8d60-665dff178d44\") " pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-6785w" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.798728 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqdfs\" (UniqueName: \"kubernetes.io/projected/0b0e1ffa-6dff-4523-911c-ad0744bd9153-kube-api-access-vqdfs\") pod \"neutron-operator-controller-manager-78bd47f458-g72r5\" (UID: \"0b0e1ffa-6dff-4523-911c-ad0744bd9153\") " pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-g72r5" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.798767 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wm9m\" (UniqueName: \"kubernetes.io/projected/020fa89c-9d76-439c-aee1-0843636d4469-kube-api-access-2wm9m\") pod \"manila-operator-controller-manager-58f887965d-47csv\" (UID: \"020fa89c-9d76-439c-aee1-0843636d4469\") " pod="openstack-operators/manila-operator-controller-manager-58f887965d-47csv" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.828779 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-rw4vq" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.829890 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78bd47f458-g72r5"] Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.859033 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wm9m\" (UniqueName: \"kubernetes.io/projected/020fa89c-9d76-439c-aee1-0843636d4469-kube-api-access-2wm9m\") pod \"manila-operator-controller-manager-58f887965d-47csv\" (UID: \"020fa89c-9d76-439c-aee1-0843636d4469\") " pod="openstack-operators/manila-operator-controller-manager-58f887965d-47csv" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.891239 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-cfbb9c588-q9nlr"] Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.895615 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-q9nlr" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.899793 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqdfs\" (UniqueName: \"kubernetes.io/projected/0b0e1ffa-6dff-4523-911c-ad0744bd9153-kube-api-access-vqdfs\") pod \"neutron-operator-controller-manager-78bd47f458-g72r5\" (UID: \"0b0e1ffa-6dff-4523-911c-ad0744bd9153\") " pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-g72r5" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.899853 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbs4h\" (UniqueName: \"kubernetes.io/projected/0d78d642-939c-47e3-8d60-665dff178d44-kube-api-access-hbs4h\") pod \"mariadb-operator-controller-manager-54b5986bb8-6785w\" (UID: \"0d78d642-939c-47e3-8d60-665dff178d44\") " pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-6785w" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.900014 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-rxtcx" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.900285 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-4g92d" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.941031 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqdfs\" (UniqueName: \"kubernetes.io/projected/0b0e1ffa-6dff-4523-911c-ad0744bd9153-kube-api-access-vqdfs\") pod \"neutron-operator-controller-manager-78bd47f458-g72r5\" (UID: \"0b0e1ffa-6dff-4523-911c-ad0744bd9153\") " pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-g72r5" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.943502 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbs4h\" (UniqueName: \"kubernetes.io/projected/0d78d642-939c-47e3-8d60-665dff178d44-kube-api-access-hbs4h\") pod \"mariadb-operator-controller-manager-54b5986bb8-6785w\" (UID: \"0d78d642-939c-47e3-8d60-665dff178d44\") " pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-6785w" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.947515 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-kpg5q"] Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.948891 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-kpg5q" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.959030 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-lcdxs" Nov 23 14:58:36 crc kubenswrapper[4718]: I1123 14:58:36.974520 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-cfbb9c588-q9nlr"] Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.004132 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54fc5f65b7-5wtcx"] Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.005088 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-5wtcx" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.008071 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5297\" (UniqueName: \"kubernetes.io/projected/968c85cb-d53b-40e8-9651-7127fc58f61a-kube-api-access-m5297\") pod \"octavia-operator-controller-manager-54cfbf4c7d-kpg5q\" (UID: \"968c85cb-d53b-40e8-9651-7127fc58f61a\") " pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-kpg5q" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.008121 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj6lg\" (UniqueName: \"kubernetes.io/projected/934a178a-2178-4c2d-bda8-9bb817f78644-kube-api-access-zj6lg\") pod \"nova-operator-controller-manager-cfbb9c588-q9nlr\" (UID: \"934a178a-2178-4c2d-bda8-9bb817f78644\") " pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-q9nlr" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.008974 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54fc5f65b7-5wtcx"] Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.011371 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-7nl58" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.015197 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-kpg5q"] Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.036516 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b797b8dff-7wn4t"] Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.038036 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-7wn4t" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.039895 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58f887965d-47csv" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.041601 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk"] Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.043676 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.044510 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-9pn5f" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.045602 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-qd4zg" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.046579 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.046985 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-d656998f4-p7dq2"] Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.048067 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d656998f4-p7dq2" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.058386 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-g72r5" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.059293 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-cz9p4" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.062230 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b797b8dff-7wn4t"] Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.104698 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d656998f4-p7dq2"] Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.105864 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-6785w" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.109279 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96rfx\" (UniqueName: \"kubernetes.io/projected/9d46e777-1d50-42a0-b20f-a24b155a0e43-kube-api-access-96rfx\") pod \"placement-operator-controller-manager-5b797b8dff-7wn4t\" (UID: \"9d46e777-1d50-42a0-b20f-a24b155a0e43\") " pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-7wn4t" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.109328 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj6lg\" (UniqueName: \"kubernetes.io/projected/934a178a-2178-4c2d-bda8-9bb817f78644-kube-api-access-zj6lg\") pod \"nova-operator-controller-manager-cfbb9c588-q9nlr\" (UID: \"934a178a-2178-4c2d-bda8-9bb817f78644\") " pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-q9nlr" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.109350 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4m2h\" (UniqueName: \"kubernetes.io/projected/306074ad-d60e-41a2-975b-901d8874be23-kube-api-access-f4m2h\") pod \"ovn-operator-controller-manager-54fc5f65b7-5wtcx\" (UID: \"306074ad-d60e-41a2-975b-901d8874be23\") " pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-5wtcx" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.109370 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk\" (UID: \"ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.109410 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvqkw\" (UniqueName: \"kubernetes.io/projected/ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d-kube-api-access-hvqkw\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk\" (UID: \"ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.109432 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28rgk\" (UniqueName: \"kubernetes.io/projected/313f2889-e11a-440a-8358-612780f4a348-kube-api-access-28rgk\") pod \"swift-operator-controller-manager-d656998f4-p7dq2\" (UID: \"313f2889-e11a-440a-8358-612780f4a348\") " pod="openstack-operators/swift-operator-controller-manager-d656998f4-p7dq2" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.109494 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5297\" (UniqueName: \"kubernetes.io/projected/968c85cb-d53b-40e8-9651-7127fc58f61a-kube-api-access-m5297\") pod \"octavia-operator-controller-manager-54cfbf4c7d-kpg5q\" (UID: \"968c85cb-d53b-40e8-9651-7127fc58f61a\") " pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-kpg5q" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.144531 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk"] Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.148390 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5297\" (UniqueName: \"kubernetes.io/projected/968c85cb-d53b-40e8-9651-7127fc58f61a-kube-api-access-m5297\") pod \"octavia-operator-controller-manager-54cfbf4c7d-kpg5q\" (UID: \"968c85cb-d53b-40e8-9651-7127fc58f61a\") " pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-kpg5q" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.155397 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-qtpps"] Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.156702 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-qtpps" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.161264 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-fsfkn" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.169774 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj6lg\" (UniqueName: \"kubernetes.io/projected/934a178a-2178-4c2d-bda8-9bb817f78644-kube-api-access-zj6lg\") pod \"nova-operator-controller-manager-cfbb9c588-q9nlr\" (UID: \"934a178a-2178-4c2d-bda8-9bb817f78644\") " pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-q9nlr" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.180299 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-qtpps"] Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.220332 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-b4c496f69-kt77s"] Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.223371 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-b4c496f69-kt77s" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.226168 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvqkw\" (UniqueName: \"kubernetes.io/projected/ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d-kube-api-access-hvqkw\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk\" (UID: \"ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.226217 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-449b4\" (UniqueName: \"kubernetes.io/projected/424e5dfa-98a5-480c-aeb9-8f279b2fdee4-kube-api-access-449b4\") pod \"telemetry-operator-controller-manager-6d4bf84b58-qtpps\" (UID: \"424e5dfa-98a5-480c-aeb9-8f279b2fdee4\") " pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-qtpps" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.226243 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28rgk\" (UniqueName: \"kubernetes.io/projected/313f2889-e11a-440a-8358-612780f4a348-kube-api-access-28rgk\") pod \"swift-operator-controller-manager-d656998f4-p7dq2\" (UID: \"313f2889-e11a-440a-8358-612780f4a348\") " pod="openstack-operators/swift-operator-controller-manager-d656998f4-p7dq2" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.226273 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b72e1603-d77f-4edc-87a2-3cc5469620fe-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-h8b4l\" (UID: \"b72e1603-d77f-4edc-87a2-3cc5469620fe\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-h8b4l" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.226327 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96rfx\" (UniqueName: \"kubernetes.io/projected/9d46e777-1d50-42a0-b20f-a24b155a0e43-kube-api-access-96rfx\") pod \"placement-operator-controller-manager-5b797b8dff-7wn4t\" (UID: \"9d46e777-1d50-42a0-b20f-a24b155a0e43\") " pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-7wn4t" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.226353 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4m2h\" (UniqueName: \"kubernetes.io/projected/306074ad-d60e-41a2-975b-901d8874be23-kube-api-access-f4m2h\") pod \"ovn-operator-controller-manager-54fc5f65b7-5wtcx\" (UID: \"306074ad-d60e-41a2-975b-901d8874be23\") " pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-5wtcx" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.226378 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk\" (UID: \"ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk" Nov 23 14:58:37 crc kubenswrapper[4718]: E1123 14:58:37.226526 4718 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 23 
14:58:37 crc kubenswrapper[4718]: E1123 14:58:37.226570 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d-cert podName:ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d nodeName:}" failed. No retries permitted until 2025-11-23 14:58:37.726555708 +0000 UTC m=+768.966175552 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d-cert") pod "openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk" (UID: "ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.230702 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-r75l8" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.240480 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-q9nlr" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.262998 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-b4c496f69-kt77s"] Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.263048 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28rgk\" (UniqueName: \"kubernetes.io/projected/313f2889-e11a-440a-8358-612780f4a348-kube-api-access-28rgk\") pod \"swift-operator-controller-manager-d656998f4-p7dq2\" (UID: \"313f2889-e11a-440a-8358-612780f4a348\") " pod="openstack-operators/swift-operator-controller-manager-d656998f4-p7dq2" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.273994 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b72e1603-d77f-4edc-87a2-3cc5469620fe-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-h8b4l\" (UID: \"b72e1603-d77f-4edc-87a2-3cc5469620fe\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-h8b4l" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.278650 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8c6448b9f-bfnhk"] Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.280331 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-bfnhk" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.282666 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-c2jz5" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.289107 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8c6448b9f-bfnhk"] Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.304167 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvqkw\" (UniqueName: \"kubernetes.io/projected/ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d-kube-api-access-hvqkw\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk\" (UID: \"ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.317598 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4m2h\" (UniqueName: \"kubernetes.io/projected/306074ad-d60e-41a2-975b-901d8874be23-kube-api-access-f4m2h\") pod \"ovn-operator-controller-manager-54fc5f65b7-5wtcx\" (UID: \"306074ad-d60e-41a2-975b-901d8874be23\") " pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-5wtcx" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.318630 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-kpg5q" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.318987 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96rfx\" (UniqueName: \"kubernetes.io/projected/9d46e777-1d50-42a0-b20f-a24b155a0e43-kube-api-access-96rfx\") pod \"placement-operator-controller-manager-5b797b8dff-7wn4t\" (UID: \"9d46e777-1d50-42a0-b20f-a24b155a0e43\") " pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-7wn4t" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.327403 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-449b4\" (UniqueName: \"kubernetes.io/projected/424e5dfa-98a5-480c-aeb9-8f279b2fdee4-kube-api-access-449b4\") pod \"telemetry-operator-controller-manager-6d4bf84b58-qtpps\" (UID: \"424e5dfa-98a5-480c-aeb9-8f279b2fdee4\") " pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-qtpps" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.327508 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmgts\" (UniqueName: \"kubernetes.io/projected/8e9db0b8-bd2b-45fa-8105-2524e81bcd70-kube-api-access-wmgts\") pod \"test-operator-controller-manager-b4c496f69-kt77s\" (UID: \"8e9db0b8-bd2b-45fa-8105-2524e81bcd70\") " pod="openstack-operators/test-operator-controller-manager-b4c496f69-kt77s" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.327748 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fggq6\" (UniqueName: \"kubernetes.io/projected/e3358e41-4842-4768-8235-96a8166d43b0-kube-api-access-fggq6\") pod \"watcher-operator-controller-manager-8c6448b9f-bfnhk\" (UID: \"e3358e41-4842-4768-8235-96a8166d43b0\") " pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-bfnhk" Nov 23 14:58:37 crc 
kubenswrapper[4718]: I1123 14:58:37.348924 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-449b4\" (UniqueName: \"kubernetes.io/projected/424e5dfa-98a5-480c-aeb9-8f279b2fdee4-kube-api-access-449b4\") pod \"telemetry-operator-controller-manager-6d4bf84b58-qtpps\" (UID: \"424e5dfa-98a5-480c-aeb9-8f279b2fdee4\") " pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-qtpps" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.352048 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-669b8498dc-8hbzm"] Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.357999 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-669b8498dc-8hbzm" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.358668 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-5wtcx" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.362510 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-669b8498dc-8hbzm"] Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.365117 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-thwqt" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.365285 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.381107 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-bnrmb"] Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.382135 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-bnrmb" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.385985 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-76jqb" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.395959 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-bnrmb"] Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.396198 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-h8b4l" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.429133 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxfbm\" (UniqueName: \"kubernetes.io/projected/76e89747-f3cb-45cd-beff-22193095b455-kube-api-access-bxfbm\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-bnrmb\" (UID: \"76e89747-f3cb-45cd-beff-22193095b455\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-bnrmb" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.429790 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-7wn4t" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.432596 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f871886-2351-4861-a1d6-3f7711fa936e-cert\") pod \"openstack-operator-controller-manager-669b8498dc-8hbzm\" (UID: \"2f871886-2351-4861-a1d6-3f7711fa936e\") " pod="openstack-operators/openstack-operator-controller-manager-669b8498dc-8hbzm" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.433104 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvpv7\" (UniqueName: \"kubernetes.io/projected/2f871886-2351-4861-a1d6-3f7711fa936e-kube-api-access-jvpv7\") pod \"openstack-operator-controller-manager-669b8498dc-8hbzm\" (UID: \"2f871886-2351-4861-a1d6-3f7711fa936e\") " pod="openstack-operators/openstack-operator-controller-manager-669b8498dc-8hbzm" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.433174 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmgts\" (UniqueName: \"kubernetes.io/projected/8e9db0b8-bd2b-45fa-8105-2524e81bcd70-kube-api-access-wmgts\") pod \"test-operator-controller-manager-b4c496f69-kt77s\" (UID: \"8e9db0b8-bd2b-45fa-8105-2524e81bcd70\") " pod="openstack-operators/test-operator-controller-manager-b4c496f69-kt77s" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.433214 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fggq6\" (UniqueName: \"kubernetes.io/projected/e3358e41-4842-4768-8235-96a8166d43b0-kube-api-access-fggq6\") pod \"watcher-operator-controller-manager-8c6448b9f-bfnhk\" (UID: \"e3358e41-4842-4768-8235-96a8166d43b0\") " pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-bfnhk" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.452878 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75fb479bcc-kwnhv"] Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.484178 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmgts\" (UniqueName: \"kubernetes.io/projected/8e9db0b8-bd2b-45fa-8105-2524e81bcd70-kube-api-access-wmgts\") pod \"test-operator-controller-manager-b4c496f69-kt77s\" (UID: \"8e9db0b8-bd2b-45fa-8105-2524e81bcd70\") " pod="openstack-operators/test-operator-controller-manager-b4c496f69-kt77s" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.485601 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fggq6\" (UniqueName: \"kubernetes.io/projected/e3358e41-4842-4768-8235-96a8166d43b0-kube-api-access-fggq6\") pod \"watcher-operator-controller-manager-8c6448b9f-bfnhk\" (UID: \"e3358e41-4842-4768-8235-96a8166d43b0\") " pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-bfnhk" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.498866 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d656998f4-p7dq2" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.534394 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxfbm\" (UniqueName: \"kubernetes.io/projected/76e89747-f3cb-45cd-beff-22193095b455-kube-api-access-bxfbm\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-bnrmb\" (UID: \"76e89747-f3cb-45cd-beff-22193095b455\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-bnrmb" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.534515 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f871886-2351-4861-a1d6-3f7711fa936e-cert\") pod \"openstack-operator-controller-manager-669b8498dc-8hbzm\" (UID: \"2f871886-2351-4861-a1d6-3f7711fa936e\") " pod="openstack-operators/openstack-operator-controller-manager-669b8498dc-8hbzm" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.534563 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvpv7\" (UniqueName: \"kubernetes.io/projected/2f871886-2351-4861-a1d6-3f7711fa936e-kube-api-access-jvpv7\") pod \"openstack-operator-controller-manager-669b8498dc-8hbzm\" (UID: \"2f871886-2351-4861-a1d6-3f7711fa936e\") " pod="openstack-operators/openstack-operator-controller-manager-669b8498dc-8hbzm" Nov 23 14:58:37 crc kubenswrapper[4718]: E1123 14:58:37.536187 4718 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 23 14:58:37 crc kubenswrapper[4718]: E1123 14:58:37.536273 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f871886-2351-4861-a1d6-3f7711fa936e-cert podName:2f871886-2351-4861-a1d6-3f7711fa936e nodeName:}" failed. No retries permitted until 2025-11-23 14:58:38.036250687 +0000 UTC m=+769.275870531 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2f871886-2351-4861-a1d6-3f7711fa936e-cert") pod "openstack-operator-controller-manager-669b8498dc-8hbzm" (UID: "2f871886-2351-4861-a1d6-3f7711fa936e") : secret "webhook-server-cert" not found Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.551093 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvpv7\" (UniqueName: \"kubernetes.io/projected/2f871886-2351-4861-a1d6-3f7711fa936e-kube-api-access-jvpv7\") pod \"openstack-operator-controller-manager-669b8498dc-8hbzm\" (UID: \"2f871886-2351-4861-a1d6-3f7711fa936e\") " pod="openstack-operators/openstack-operator-controller-manager-669b8498dc-8hbzm" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.557042 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxfbm\" (UniqueName: \"kubernetes.io/projected/76e89747-f3cb-45cd-beff-22193095b455-kube-api-access-bxfbm\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-bnrmb\" (UID: \"76e89747-f3cb-45cd-beff-22193095b455\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-bnrmb" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.634731 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-qtpps" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.683794 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-b4c496f69-kt77s" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.696161 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-bfnhk" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.717952 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6498cbf48f-k82wv"] Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.724056 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-bnrmb" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.737522 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk\" (UID: \"ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.743036 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk\" (UID: \"ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk" Nov 23 14:58:37 crc kubenswrapper[4718]: I1123 14:58:37.762836 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk" Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.050154 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f871886-2351-4861-a1d6-3f7711fa936e-cert\") pod \"openstack-operator-controller-manager-669b8498dc-8hbzm\" (UID: \"2f871886-2351-4861-a1d6-3f7711fa936e\") " pod="openstack-operators/openstack-operator-controller-manager-669b8498dc-8hbzm" Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.056115 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f871886-2351-4861-a1d6-3f7711fa936e-cert\") pod \"openstack-operator-controller-manager-669b8498dc-8hbzm\" (UID: \"2f871886-2351-4861-a1d6-3f7711fa936e\") " pod="openstack-operators/openstack-operator-controller-manager-669b8498dc-8hbzm" Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.110868 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-56f54d6746-k88hn"] Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.121648 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-767ccfd65f-v5dn9"] Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.178799 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7969689c84-h4hzc"] Nov 23 14:58:38 crc kubenswrapper[4718]: W1123 14:58:38.183730 4718 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4618913_9a14_4f47_89ec_9c4b0a931434.slice/crio-7a840065eaddd6d106757d94543653f363bbf7af23c8413b0fec5889e2943fc8 WatchSource:0}: Error finding container 7a840065eaddd6d106757d94543653f363bbf7af23c8413b0fec5889e2943fc8: Status 404 returned error can't find the container with id 7a840065eaddd6d106757d94543653f363bbf7af23c8413b0fec5889e2943fc8 Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.186839 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-598f69df5d-8trcx"] Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.190459 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78bd47f458-g72r5"] Nov 23 14:58:38 crc kubenswrapper[4718]: W1123 14:58:38.194318 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b0e1ffa_6dff_4523_911c_ad0744bd9153.slice/crio-361f1b74e62af2e59a7b231a61f01ae30f0c2c6dadbdb908135177c5a14d4cdd WatchSource:0}: Error finding container 361f1b74e62af2e59a7b231a61f01ae30f0c2c6dadbdb908135177c5a14d4cdd: Status 404 returned error can't find the container with id 361f1b74e62af2e59a7b231a61f01ae30f0c2c6dadbdb908135177c5a14d4cdd Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.309031 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-669b8498dc-8hbzm" Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.326886 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-kwnhv" event={"ID":"e402a0ac-7f35-4bab-9948-b664c0ef9636","Type":"ContainerStarted","Data":"ebe0746558f66bb28be4db8b11f194f44936bb4e6f91255254f66e00d279e6f8"} Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.328070 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-8trcx" event={"ID":"f4618913-9a14-4f47-89ec-9c4b0a931434","Type":"ContainerStarted","Data":"7a840065eaddd6d106757d94543653f363bbf7af23c8413b0fec5889e2943fc8"} Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.329080 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-k88hn" event={"ID":"1b5f1764-1a63-4fda-988c-49a8bc17fe79","Type":"ContainerStarted","Data":"eaaac7e423ace3ede764b22ae50662c451397c88a117084a85fedac7d7362fb2"} Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.329958 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-k82wv" event={"ID":"b5aad852-89aa-459e-8771-50ef010620ef","Type":"ContainerStarted","Data":"3f63e5920f9118627f52c59b6966bbe17bb25b526e29d8b2a10cf5c1a7a2c7bb"} Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.330703 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-g72r5" event={"ID":"0b0e1ffa-6dff-4523-911c-ad0744bd9153","Type":"ContainerStarted","Data":"361f1b74e62af2e59a7b231a61f01ae30f0c2c6dadbdb908135177c5a14d4cdd"} Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.331464 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7969689c84-h4hzc" 
event={"ID":"960d1cfd-fc93-466c-8590-723c68c0bc05","Type":"ContainerStarted","Data":"f78e7810871f012d36da98bcc0fb19dd5555ba7fab4a022b19df83384c49f26d"} Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.332195 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-v5dn9" event={"ID":"ce59b10a-2110-44a2-9489-b1e06f6a1032","Type":"ContainerStarted","Data":"ed004b5f80d40cd43ff4ec4ccaf2a001efd60d860e981daf090e98c0c69c6637"} Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.644641 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7454b96578-4g92d"] Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.669708 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-54b5986bb8-6785w"] Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.681309 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-99b499f4-rw4vq"] Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.691616 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6dd8864d7c-h8b4l"] Nov 23 14:58:38 crc kubenswrapper[4718]: W1123 14:58:38.695810 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06302b9c_68a3_4b48_88d7_cc0885ca0156.slice/crio-819b3b688289546b8bc66ad73a7c46f79e2e5ab7cae2470db12fdbc94871b518 WatchSource:0}: Error finding container 819b3b688289546b8bc66ad73a7c46f79e2e5ab7cae2470db12fdbc94871b518: Status 404 returned error can't find the container with id 819b3b688289546b8bc66ad73a7c46f79e2e5ab7cae2470db12fdbc94871b518 Nov 23 14:58:38 crc kubenswrapper[4718]: W1123 14:58:38.700721 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod306074ad_d60e_41a2_975b_901d8874be23.slice/crio-c254087f1d62ef07dd61cae60236a6ecd98d2bf857028ae55d7b8e3185ad40de WatchSource:0}: Error finding container c254087f1d62ef07dd61cae60236a6ecd98d2bf857028ae55d7b8e3185ad40de: Status 404 returned error can't find the container with id c254087f1d62ef07dd61cae60236a6ecd98d2bf857028ae55d7b8e3185ad40de Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.709708 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54fc5f65b7-5wtcx"] Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.728086 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58f887965d-47csv"] Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.741195 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-kpg5q"] Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.746964 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d656998f4-p7dq2"] Nov 23 14:58:38 crc kubenswrapper[4718]: W1123 14:58:38.752721 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod020fa89c_9d76_439c_aee1_0843636d4469.slice/crio-6ccd490c28c760a1bd23d800e98ce626a5a7e6e80bb7881213e39b52cb3d2e64 WatchSource:0}: Error finding container 
6ccd490c28c760a1bd23d800e98ce626a5a7e6e80bb7881213e39b52cb3d2e64: Status 404 returned error can't find the container with id 6ccd490c28c760a1bd23d800e98ce626a5a7e6e80bb7881213e39b52cb3d2e64 Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.755584 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-cfbb9c588-q9nlr"] Nov 23 14:58:38 crc kubenswrapper[4718]: W1123 14:58:38.758364 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod313f2889_e11a_440a_8358_612780f4a348.slice/crio-00f5392829658a492275242814b684593c7d96ca83132065077bfc0c37a0ded8 WatchSource:0}: Error finding container 00f5392829658a492275242814b684593c7d96ca83132065077bfc0c37a0ded8: Status 404 returned error can't find the container with id 00f5392829658a492275242814b684593c7d96ca83132065077bfc0c37a0ded8 Nov 23 14:58:38 crc kubenswrapper[4718]: E1123 14:58:38.761631 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-28rgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-d656998f4-p7dq2_openstack-operators(313f2889-e11a-440a-8358-612780f4a348): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.785400 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk"] Nov 23 14:58:38 crc kubenswrapper[4718]: W1123 14:58:38.798738 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae8cdd4c_2e47_4c34_b19a_ebe32f80fe3d.slice/crio-e2385641152d371544fd1683a2ce6aeabe79704c3f15dcdd0fdd8c1661a0c07d WatchSource:0}: Error finding container e2385641152d371544fd1683a2ce6aeabe79704c3f15dcdd0fdd8c1661a0c07d: Status 404 returned error can't find the container with id e2385641152d371544fd1683a2ce6aeabe79704c3f15dcdd0fdd8c1661a0c07d Nov 23 14:58:38 crc kubenswrapper[4718]: E1123 14:58:38.801958 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-
notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_UR
L_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},En
vVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podifie
d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hvqkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk_openstack-operators(ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.955215 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-bnrmb"] Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.960421 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8c6448b9f-bfnhk"] Nov 23 14:58:38 crc 
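
The `ErrImagePull: pull QPS exceeded` failures in the container dumps above are not registry errors: the kubelet rate-limits image pulls with a token bucket configured by `registryPullQPS` and `registryBurst` in its configuration (commonly 5 pulls/s with a burst of 10, though the exact defaults depend on the kubelet version). Starting a dozen-plus operator deployments at once drains the bucket, so the later pulls fail immediately and are left to the pod workers to retry. A self-contained sketch of the throttle — the bucket parameters are assumptions, not values read from this node:

```go
package main

import (
	"fmt"
	"time"
)

// bucket is a minimal token bucket: tokens refill at `rate` per second up to
// `burst`; a pull that finds no token is rejected, which the kubelet surfaces
// as ErrImagePull: "pull QPS exceeded".
type bucket struct {
	tokens, burst, rate float64
	last                time.Time
}

func (b *bucket) allowPull(now time.Time) bool {
	b.tokens += now.Sub(b.last).Seconds() * b.rate
	if b.tokens > b.burst {
		b.tokens = b.burst
	}
	b.last = now
	if b.tokens < 1 {
		return false
	}
	b.tokens--
	return true
}

func main() {
	b := &bucket{tokens: 10, burst: 10, rate: 5, last: time.Now()} // assumed 5 QPS / burst 10
	for i := 1; i <= 14; i++ { // 14 near-simultaneous operator image pulls
		fmt.Printf("pull %2d allowed=%v\n", i, b.allowPull(time.Now()))
	}
}
```

The adjacent `Failed to process watch event ... 404` warnings are a separate and typically benign race: cAdvisor's cgroup watcher notices the new `crio-*` cgroup before CRI-O has registered the container, and the later `ContainerStarted` PLEG events show the pods coming up regardless.
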
kubenswrapper[4718]: W1123 14:58:38.964197 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76e89747_f3cb_45cd_beff_22193095b455.slice/crio-5c3623d36f6ffd4a559f0f6fb8fb20905106335dd1b3b02cd0cf2bb34ce5c2c3 WatchSource:0}: Error finding container 5c3623d36f6ffd4a559f0f6fb8fb20905106335dd1b3b02cd0cf2bb34ce5c2c3: Status 404 returned error can't find the container with id 5c3623d36f6ffd4a559f0f6fb8fb20905106335dd1b3b02cd0cf2bb34ce5c2c3 Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.973344 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b797b8dff-7wn4t"] Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.982168 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-b4c496f69-kt77s"] Nov 23 14:58:38 crc kubenswrapper[4718]: E1123 14:58:38.982334 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fggq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-8c6448b9f-bfnhk_openstack-operators(e3358e41-4842-4768-8235-96a8166d43b0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.985919 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-qtpps"] Nov 23 14:58:38 crc kubenswrapper[4718]: E1123 14:58:38.990548 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96rfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b797b8dff-7wn4t_openstack-operators(9d46e777-1d50-42a0-b20f-a24b155a0e43): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 14:58:38 crc kubenswrapper[4718]: W1123 14:58:38.991420 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e9db0b8_bd2b_45fa_8105_2524e81bcd70.slice/crio-4bfb174d2fe4be9d67080614db413e54b339490e9f4ea100641a7d73d85acb51 WatchSource:0}: Error finding container 4bfb174d2fe4be9d67080614db413e54b339490e9f4ea100641a7d73d85acb51: Status 404 returned error can't find the container with id 4bfb174d2fe4be9d67080614db413e54b339490e9f4ea100641a7d73d85acb51 Nov 23 14:58:38 crc kubenswrapper[4718]: E1123 14:58:38.994650 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wmgts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-b4c496f69-kt77s_openstack-operators(8e9db0b8-bd2b-45fa-8105-2524e81bcd70): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 14:58:38 crc kubenswrapper[4718]: W1123 14:58:38.994829 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod424e5dfa_98a5_480c_aeb9_8f279b2fdee4.slice/crio-1acb9c397c281f8b1212fbbad5443ed3273ef484ef7f2f635ef9608092432942 WatchSource:0}: Error finding container 1acb9c397c281f8b1212fbbad5443ed3273ef484ef7f2f635ef9608092432942: Status 404 returned error can't find the container with id 1acb9c397c281f8b1212fbbad5443ed3273ef484ef7f2f635ef9608092432942 Nov 23 14:58:38 crc kubenswrapper[4718]: I1123 14:58:38.995249 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-669b8498dc-8hbzm"] Nov 23 14:58:39 crc kubenswrapper[4718]: E1123 14:58:39.007910 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-449b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6d4bf84b58-qtpps_openstack-operators(424e5dfa-98a5-480c-aeb9-8f279b2fdee4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 14:58:39 crc kubenswrapper[4718]: W1123 14:58:39.017302 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f871886_2351_4861_a1d6_3f7711fa936e.slice/crio-542ded6263baf5bbc2a889097f1c80da2a1ee021bf206b96334d818c04b6ab46 WatchSource:0}: Error finding container 542ded6263baf5bbc2a889097f1c80da2a1ee021bf206b96334d818c04b6ab46: Status 404 returned error can't find the container with id 542ded6263baf5bbc2a889097f1c80da2a1ee021bf206b96334d818c04b6ab46 Nov 23 14:58:39 crc kubenswrapper[4718]: E1123 14:58:39.220190 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk" podUID="ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d" Nov 23 14:58:39 crc kubenswrapper[4718]: I1123 14:58:39.365514 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk" event={"ID":"ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d","Type":"ContainerStarted","Data":"a7485f8493868a335e467052eb987bce2193c22a8ff55819b2dfa826716e1551"} Nov 23 14:58:39 crc kubenswrapper[4718]: I1123 14:58:39.365563 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk" event={"ID":"ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d","Type":"ContainerStarted","Data":"e2385641152d371544fd1683a2ce6aeabe79704c3f15dcdd0fdd8c1661a0c07d"} Nov 23 14:58:39 crc kubenswrapper[4718]: E1123 14:58:39.367590 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk" podUID="ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d" Nov 23 14:58:39 crc kubenswrapper[4718]: I1123 14:58:39.368203 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-4g92d" event={"ID":"7cc68ab9-c26a-437b-adcd-977eb063fe25","Type":"ContainerStarted","Data":"6fd563c14e227f15294f77c49fb188b024f4c43aa9bc76dcd02b6740995402a2"} Nov 23 14:58:39 crc kubenswrapper[4718]: I1123 14:58:39.369342 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-h8b4l" event={"ID":"b72e1603-d77f-4edc-87a2-3cc5469620fe","Type":"ContainerStarted","Data":"dfdc82fbb71f96393d60d940333644748f36b6ac059f23bfc8e1ce8741c2f167"} Nov 23 14:58:39 crc kubenswrapper[4718]: I1123 14:58:39.370131 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-669b8498dc-8hbzm" event={"ID":"2f871886-2351-4861-a1d6-3f7711fa936e","Type":"ContainerStarted","Data":"542ded6263baf5bbc2a889097f1c80da2a1ee021bf206b96334d818c04b6ab46"} Nov 23 14:58:39 crc kubenswrapper[4718]: I1123 14:58:39.382683 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d656998f4-p7dq2" event={"ID":"313f2889-e11a-440a-8358-612780f4a348","Type":"ContainerStarted","Data":"67158b20d73e792924d657f79ecbb7364b2bfc5bf441094c4136803994e30ea5"} Nov 23 14:58:39 crc kubenswrapper[4718]: I1123 14:58:39.382731 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d656998f4-p7dq2" event={"ID":"313f2889-e11a-440a-8358-612780f4a348","Type":"ContainerStarted","Data":"00f5392829658a492275242814b684593c7d96ca83132065077bfc0c37a0ded8"} Nov 23 14:58:39 crc kubenswrapper[4718]: I1123 14:58:39.394979 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-bnrmb" event={"ID":"76e89747-f3cb-45cd-beff-22193095b455","Type":"ContainerStarted","Data":"5c3623d36f6ffd4a559f0f6fb8fb20905106335dd1b3b02cd0cf2bb34ce5c2c3"} Nov 23 14:58:39 crc kubenswrapper[4718]: I1123 14:58:39.396091 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-q9nlr" event={"ID":"934a178a-2178-4c2d-bda8-9bb817f78644","Type":"ContainerStarted","Data":"bf3627941efa7d6aa250e678442e8efe8792d97636c2a6323caff25eb6ceb892"} Nov 23 14:58:39 crc kubenswrapper[4718]: I1123 14:58:39.398348 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-qtpps" event={"ID":"424e5dfa-98a5-480c-aeb9-8f279b2fdee4","Type":"ContainerStarted","Data":"1acb9c397c281f8b1212fbbad5443ed3273ef484ef7f2f635ef9608092432942"} Nov 23 14:58:39 crc 
kubenswrapper[4718]: I1123 14:58:39.400030 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-6785w" event={"ID":"0d78d642-939c-47e3-8d60-665dff178d44","Type":"ContainerStarted","Data":"2c41bd2655004a92a1ce4fc996d0d462e2a5b680e47cdb42929b402d6bbe1634"} Nov 23 14:58:39 crc kubenswrapper[4718]: I1123 14:58:39.403820 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-bfnhk" event={"ID":"e3358e41-4842-4768-8235-96a8166d43b0","Type":"ContainerStarted","Data":"5a3fddbb1b783155df24db17a28f6218ef1f150ccd2001482651ac85ad9d5834"} Nov 23 14:58:39 crc kubenswrapper[4718]: I1123 14:58:39.407027 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-7wn4t" event={"ID":"9d46e777-1d50-42a0-b20f-a24b155a0e43","Type":"ContainerStarted","Data":"0cb6fe70b2cc01bce8071e6bd2e08d8242fb51e95c7a05badb7b5ce1bc9f4f97"} Nov 23 14:58:39 crc kubenswrapper[4718]: I1123 14:58:39.412166 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-rw4vq" event={"ID":"06302b9c-68a3-4b48-88d7-cc0885ca0156","Type":"ContainerStarted","Data":"819b3b688289546b8bc66ad73a7c46f79e2e5ab7cae2470db12fdbc94871b518"} Nov 23 14:58:39 crc kubenswrapper[4718]: I1123 14:58:39.428503 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-b4c496f69-kt77s" event={"ID":"8e9db0b8-bd2b-45fa-8105-2524e81bcd70","Type":"ContainerStarted","Data":"4bfb174d2fe4be9d67080614db413e54b339490e9f4ea100641a7d73d85acb51"} Nov 23 14:58:39 crc kubenswrapper[4718]: I1123 14:58:39.429632 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-kpg5q" event={"ID":"968c85cb-d53b-40e8-9651-7127fc58f61a","Type":"ContainerStarted","Data":"22c464b1df944b49db4276380a4307a53954800d7c09b659d7e892d9af2cffe0"} Nov 23 14:58:39 crc kubenswrapper[4718]: I1123 14:58:39.430913 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-5wtcx" event={"ID":"306074ad-d60e-41a2-975b-901d8874be23","Type":"ContainerStarted","Data":"c254087f1d62ef07dd61cae60236a6ecd98d2bf857028ae55d7b8e3185ad40de"} Nov 23 14:58:39 crc kubenswrapper[4718]: I1123 14:58:39.433464 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58f887965d-47csv" event={"ID":"020fa89c-9d76-439c-aee1-0843636d4469","Type":"ContainerStarted","Data":"6ccd490c28c760a1bd23d800e98ce626a5a7e6e80bb7881213e39b52cb3d2e64"} Nov 23 14:58:40 crc kubenswrapper[4718]: E1123 14:58:40.190541 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-d656998f4-p7dq2" podUID="313f2889-e11a-440a-8358-612780f4a348" Nov 23 14:58:40 crc kubenswrapper[4718]: E1123 14:58:40.439045 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-7wn4t" podUID="9d46e777-1d50-42a0-b20f-a24b155a0e43" Nov 23 14:58:40 crc kubenswrapper[4718]: I1123 14:58:40.458238 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-bfnhk" event={"ID":"e3358e41-4842-4768-8235-96a8166d43b0","Type":"ContainerStarted","Data":"5bccb5ad02918225b0b2f2d210c2092a108d07aa4cf0c947f97ee5e833b6119f"} Nov 23 14:58:40 crc kubenswrapper[4718]: I1123 14:58:40.458282 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-7wn4t" event={"ID":"9d46e777-1d50-42a0-b20f-a24b155a0e43","Type":"ContainerStarted","Data":"ae4052f69b1ddb744fb7747fb12d5d8c17b547a8f0a794a9b415446566d390c6"} Nov 23 14:58:40 crc kubenswrapper[4718]: I1123 14:58:40.458297 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-669b8498dc-8hbzm" event={"ID":"2f871886-2351-4861-a1d6-3f7711fa936e","Type":"ContainerStarted","Data":"ebe7469dd492d1e3ef5769bbefa13463eec834a29840ca5703a511891ec4345c"} Nov 23 14:58:40 crc kubenswrapper[4718]: I1123 14:58:40.458310 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-b4c496f69-kt77s" event={"ID":"8e9db0b8-bd2b-45fa-8105-2524e81bcd70","Type":"ContainerStarted","Data":"00b9abd02ca7b4c41fa43623d77702bc3234377463ebab214007ce2926bdb93b"} Nov 23 14:58:40 crc kubenswrapper[4718]: I1123 14:58:40.458321 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-qtpps" event={"ID":"424e5dfa-98a5-480c-aeb9-8f279b2fdee4","Type":"ContainerStarted","Data":"5aa9baeca815f29b632fce836006a6f05f012fd5a77555a9df5db2b8102d273c"} Nov 23 14:58:40 crc kubenswrapper[4718]: E1123 14:58:40.458991 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-d656998f4-p7dq2" podUID="313f2889-e11a-440a-8358-612780f4a348" Nov 23 14:58:40 crc kubenswrapper[4718]: E1123 14:58:40.459334 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk" podUID="ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d" Nov 23 14:58:40 crc kubenswrapper[4718]: E1123 14:58:40.459405 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-7wn4t" podUID="9d46e777-1d50-42a0-b20f-a24b155a0e43" Nov 23 14:58:40 crc kubenswrapper[4718]: E1123 14:58:40.480179 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-b4c496f69-kt77s" podUID="8e9db0b8-bd2b-45fa-8105-2524e81bcd70" Nov 23 14:58:40 crc kubenswrapper[4718]: E1123 14:58:40.492429 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-bfnhk" podUID="e3358e41-4842-4768-8235-96a8166d43b0" Nov 23 14:58:40 crc kubenswrapper[4718]: E1123 14:58:40.492493 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-qtpps" podUID="424e5dfa-98a5-480c-aeb9-8f279b2fdee4" Nov 23 14:58:41 crc kubenswrapper[4718]: I1123 14:58:41.469433 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-669b8498dc-8hbzm" event={"ID":"2f871886-2351-4861-a1d6-3f7711fa936e","Type":"ContainerStarted","Data":"19e649bf1a6820128da4bd2174500953d85100911ad18dc01a76d96bbb9f2419"} Nov 23 14:58:41 crc kubenswrapper[4718]: I1123 14:58:41.471107 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-669b8498dc-8hbzm" Nov 23 14:58:41 crc kubenswrapper[4718]: E1123 14:58:41.471751 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\"" pod="openstack-operators/test-operator-controller-manager-b4c496f69-kt77s" podUID="8e9db0b8-bd2b-45fa-8105-2524e81bcd70" Nov 23 14:58:41 crc kubenswrapper[4718]: E1123 14:58:41.472050 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-bfnhk" podUID="e3358e41-4842-4768-8235-96a8166d43b0" Nov 23 14:58:41 crc kubenswrapper[4718]: E1123 14:58:41.472103 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-qtpps" podUID="424e5dfa-98a5-480c-aeb9-8f279b2fdee4" Nov 23 14:58:41 crc kubenswrapper[4718]: E1123 14:58:41.475231 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-7wn4t" podUID="9d46e777-1d50-42a0-b20f-a24b155a0e43" Nov 23 14:58:41 crc kubenswrapper[4718]: I1123 14:58:41.532262 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-669b8498dc-8hbzm" podStartSLOduration=4.532245851 podStartE2EDuration="4.532245851s" podCreationTimestamp="2025-11-23 14:58:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 14:58:41.527279877 +0000 UTC m=+772.766899721" 
watchObservedRunningTime="2025-11-23 14:58:41.532245851 +0000 UTC m=+772.771865695" Nov 23 14:58:43 crc kubenswrapper[4718]: I1123 14:58:43.244203 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ltj6g"] Nov 23 14:58:43 crc kubenswrapper[4718]: I1123 14:58:43.246173 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ltj6g" Nov 23 14:58:43 crc kubenswrapper[4718]: I1123 14:58:43.248363 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ltj6g"] Nov 23 14:58:43 crc kubenswrapper[4718]: I1123 14:58:43.331259 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f2a3b6b-391d-472b-9410-aa57c7a06c1e-catalog-content\") pod \"community-operators-ltj6g\" (UID: \"3f2a3b6b-391d-472b-9410-aa57c7a06c1e\") " pod="openshift-marketplace/community-operators-ltj6g" Nov 23 14:58:43 crc kubenswrapper[4718]: I1123 14:58:43.331583 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f2a3b6b-391d-472b-9410-aa57c7a06c1e-utilities\") pod \"community-operators-ltj6g\" (UID: \"3f2a3b6b-391d-472b-9410-aa57c7a06c1e\") " pod="openshift-marketplace/community-operators-ltj6g" Nov 23 14:58:43 crc kubenswrapper[4718]: I1123 14:58:43.331631 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxchc\" (UniqueName: \"kubernetes.io/projected/3f2a3b6b-391d-472b-9410-aa57c7a06c1e-kube-api-access-dxchc\") pod \"community-operators-ltj6g\" (UID: \"3f2a3b6b-391d-472b-9410-aa57c7a06c1e\") " pod="openshift-marketplace/community-operators-ltj6g" Nov 23 14:58:43 crc kubenswrapper[4718]: I1123 14:58:43.433492 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f2a3b6b-391d-472b-9410-aa57c7a06c1e-utilities\") pod \"community-operators-ltj6g\" (UID: \"3f2a3b6b-391d-472b-9410-aa57c7a06c1e\") " pod="openshift-marketplace/community-operators-ltj6g" Nov 23 14:58:43 crc kubenswrapper[4718]: I1123 14:58:43.434076 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxchc\" (UniqueName: \"kubernetes.io/projected/3f2a3b6b-391d-472b-9410-aa57c7a06c1e-kube-api-access-dxchc\") pod \"community-operators-ltj6g\" (UID: \"3f2a3b6b-391d-472b-9410-aa57c7a06c1e\") " pod="openshift-marketplace/community-operators-ltj6g" Nov 23 14:58:43 crc kubenswrapper[4718]: I1123 14:58:43.434015 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f2a3b6b-391d-472b-9410-aa57c7a06c1e-utilities\") pod \"community-operators-ltj6g\" (UID: \"3f2a3b6b-391d-472b-9410-aa57c7a06c1e\") " pod="openshift-marketplace/community-operators-ltj6g" Nov 23 14:58:43 crc kubenswrapper[4718]: I1123 14:58:43.434634 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f2a3b6b-391d-472b-9410-aa57c7a06c1e-catalog-content\") pod \"community-operators-ltj6g\" (UID: \"3f2a3b6b-391d-472b-9410-aa57c7a06c1e\") " pod="openshift-marketplace/community-operators-ltj6g" Nov 23 14:58:43 crc kubenswrapper[4718]: I1123 14:58:43.434950 4718 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f2a3b6b-391d-472b-9410-aa57c7a06c1e-catalog-content\") pod \"community-operators-ltj6g\" (UID: \"3f2a3b6b-391d-472b-9410-aa57c7a06c1e\") " pod="openshift-marketplace/community-operators-ltj6g" Nov 23 14:58:43 crc kubenswrapper[4718]: I1123 14:58:43.455307 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxchc\" (UniqueName: \"kubernetes.io/projected/3f2a3b6b-391d-472b-9410-aa57c7a06c1e-kube-api-access-dxchc\") pod \"community-operators-ltj6g\" (UID: \"3f2a3b6b-391d-472b-9410-aa57c7a06c1e\") " pod="openshift-marketplace/community-operators-ltj6g" Nov 23 14:58:43 crc kubenswrapper[4718]: I1123 14:58:43.605538 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ltj6g" Nov 23 14:58:47 crc kubenswrapper[4718]: I1123 14:58:47.585906 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nmb9d"] Nov 23 14:58:47 crc kubenswrapper[4718]: I1123 14:58:47.589535 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nmb9d" Nov 23 14:58:47 crc kubenswrapper[4718]: I1123 14:58:47.604537 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nmb9d"] Nov 23 14:58:47 crc kubenswrapper[4718]: I1123 14:58:47.695675 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p597\" (UniqueName: \"kubernetes.io/projected/f66c151c-281c-4da0-b977-d0fa8b18ba33-kube-api-access-2p597\") pod \"redhat-operators-nmb9d\" (UID: \"f66c151c-281c-4da0-b977-d0fa8b18ba33\") " pod="openshift-marketplace/redhat-operators-nmb9d" Nov 23 14:58:47 crc kubenswrapper[4718]: I1123 14:58:47.695816 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66c151c-281c-4da0-b977-d0fa8b18ba33-utilities\") pod \"redhat-operators-nmb9d\" (UID: \"f66c151c-281c-4da0-b977-d0fa8b18ba33\") " pod="openshift-marketplace/redhat-operators-nmb9d" Nov 23 14:58:47 crc kubenswrapper[4718]: I1123 14:58:47.695939 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66c151c-281c-4da0-b977-d0fa8b18ba33-catalog-content\") pod \"redhat-operators-nmb9d\" (UID: \"f66c151c-281c-4da0-b977-d0fa8b18ba33\") " pod="openshift-marketplace/redhat-operators-nmb9d" Nov 23 14:58:47 crc kubenswrapper[4718]: I1123 14:58:47.797350 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p597\" (UniqueName: \"kubernetes.io/projected/f66c151c-281c-4da0-b977-d0fa8b18ba33-kube-api-access-2p597\") pod \"redhat-operators-nmb9d\" (UID: \"f66c151c-281c-4da0-b977-d0fa8b18ba33\") " pod="openshift-marketplace/redhat-operators-nmb9d" Nov 23 14:58:47 crc kubenswrapper[4718]: I1123 14:58:47.797411 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66c151c-281c-4da0-b977-d0fa8b18ba33-utilities\") pod \"redhat-operators-nmb9d\" (UID: \"f66c151c-281c-4da0-b977-d0fa8b18ba33\") " pod="openshift-marketplace/redhat-operators-nmb9d" Nov 23 14:58:47 crc kubenswrapper[4718]: I1123 14:58:47.797467 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66c151c-281c-4da0-b977-d0fa8b18ba33-catalog-content\") pod \"redhat-operators-nmb9d\" (UID: \"f66c151c-281c-4da0-b977-d0fa8b18ba33\") " pod="openshift-marketplace/redhat-operators-nmb9d" Nov 23 14:58:47 crc kubenswrapper[4718]: I1123 14:58:47.797832 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66c151c-281c-4da0-b977-d0fa8b18ba33-utilities\") pod \"redhat-operators-nmb9d\" (UID: \"f66c151c-281c-4da0-b977-d0fa8b18ba33\") " pod="openshift-marketplace/redhat-operators-nmb9d" Nov 23 14:58:47 crc kubenswrapper[4718]: I1123 14:58:47.797895 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66c151c-281c-4da0-b977-d0fa8b18ba33-catalog-content\") pod \"redhat-operators-nmb9d\" (UID: \"f66c151c-281c-4da0-b977-d0fa8b18ba33\") " pod="openshift-marketplace/redhat-operators-nmb9d" Nov 23 14:58:47 crc kubenswrapper[4718]: I1123 14:58:47.848514 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p597\" (UniqueName: \"kubernetes.io/projected/f66c151c-281c-4da0-b977-d0fa8b18ba33-kube-api-access-2p597\") pod \"redhat-operators-nmb9d\" (UID: \"f66c151c-281c-4da0-b977-d0fa8b18ba33\") " pod="openshift-marketplace/redhat-operators-nmb9d" Nov 23 14:58:47 crc kubenswrapper[4718]: I1123 14:58:47.945774 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nmb9d" Nov 23 14:58:48 crc kubenswrapper[4718]: I1123 14:58:48.314055 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-669b8498dc-8hbzm" Nov 23 14:58:54 crc kubenswrapper[4718]: I1123 14:58:54.390251 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ltj6g"] Nov 23 14:58:54 crc kubenswrapper[4718]: W1123 14:58:54.625546 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f2a3b6b_391d_472b_9410_aa57c7a06c1e.slice/crio-e0e1aa02ec127ccf40af8f78397d2d2a92e531a05b003f8da72082fcb0e1632f WatchSource:0}: Error finding container e0e1aa02ec127ccf40af8f78397d2d2a92e531a05b003f8da72082fcb0e1632f: Status 404 returned error can't find the container with id e0e1aa02ec127ccf40af8f78397d2d2a92e531a05b003f8da72082fcb0e1632f Nov 23 14:58:54 crc kubenswrapper[4718]: E1123 14:58:54.717098 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Nov 23 14:58:54 crc kubenswrapper[4718]: E1123 14:58:54.717424 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bxfbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-bnrmb_openstack-operators(76e89747-f3cb-45cd-beff-22193095b455): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 14:58:54 crc kubenswrapper[4718]: E1123 14:58:54.718652 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-bnrmb" podUID="76e89747-f3cb-45cd-beff-22193095b455" Nov 23 14:58:55 crc kubenswrapper[4718]: I1123 14:58:55.071759 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nmb9d"] Nov 23 14:58:55 crc kubenswrapper[4718]: W1123 14:58:55.297912 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf66c151c_281c_4da0_b977_d0fa8b18ba33.slice/crio-abef1639e7f15b4a8a9aa47ad6327a57a3ed8a3ab0edfc948222ce5bfbe9a6e5 WatchSource:0}: Error finding container abef1639e7f15b4a8a9aa47ad6327a57a3ed8a3ab0edfc948222ce5bfbe9a6e5: Status 404 returned error can't find the container with id abef1639e7f15b4a8a9aa47ad6327a57a3ed8a3ab0edfc948222ce5bfbe9a6e5 Nov 23 14:58:55 crc kubenswrapper[4718]: I1123 14:58:55.573487 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmb9d" event={"ID":"f66c151c-281c-4da0-b977-d0fa8b18ba33","Type":"ContainerStarted","Data":"abef1639e7f15b4a8a9aa47ad6327a57a3ed8a3ab0edfc948222ce5bfbe9a6e5"} Nov 23 14:58:55 crc kubenswrapper[4718]: I1123 14:58:55.577839 4718 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-k82wv" event={"ID":"b5aad852-89aa-459e-8771-50ef010620ef","Type":"ContainerStarted","Data":"a9cf2074af670aedd747f3685d4b9247cf5f06d54fd645474c61b508ce6c2804"} Nov 23 14:58:55 crc kubenswrapper[4718]: I1123 14:58:55.583796 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltj6g" event={"ID":"3f2a3b6b-391d-472b-9410-aa57c7a06c1e","Type":"ContainerStarted","Data":"e0e1aa02ec127ccf40af8f78397d2d2a92e531a05b003f8da72082fcb0e1632f"} Nov 23 14:58:55 crc kubenswrapper[4718]: I1123 14:58:55.590550 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-kwnhv" event={"ID":"e402a0ac-7f35-4bab-9948-b664c0ef9636","Type":"ContainerStarted","Data":"8d4bc1cc03b976e6d696a00d073f317bad290db9b6a58e6f57039b058762ca67"} Nov 23 14:58:55 crc kubenswrapper[4718]: E1123 14:58:55.659508 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-bnrmb" podUID="76e89747-f3cb-45cd-beff-22193095b455" Nov 23 14:58:56 crc kubenswrapper[4718]: I1123 14:58:56.640475 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-8trcx" event={"ID":"f4618913-9a14-4f47-89ec-9c4b0a931434","Type":"ContainerStarted","Data":"648b4a20fe72a9570e33b5f3fbe7cf64f686bfc3bdcd54512998fc2bc64862f7"} Nov 23 14:58:56 crc kubenswrapper[4718]: I1123 14:58:56.653022 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-q9nlr" event={"ID":"934a178a-2178-4c2d-bda8-9bb817f78644","Type":"ContainerStarted","Data":"f46185c3ea4d91ea7a334f8cd0f1126a5dde1ce1c9db80adfbe95d2b656814f8"} Nov 23 14:58:56 crc kubenswrapper[4718]: I1123 14:58:56.659668 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-v5dn9" event={"ID":"ce59b10a-2110-44a2-9489-b1e06f6a1032","Type":"ContainerStarted","Data":"6d44586ee22b94c951b07ccd01be4b6c94940b5414eddaea4fc79350ec7ddf59"} Nov 23 14:58:56 crc kubenswrapper[4718]: I1123 14:58:56.665143 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-kpg5q" event={"ID":"968c85cb-d53b-40e8-9651-7127fc58f61a","Type":"ContainerStarted","Data":"4b8c4f171e88b584de7fd614e6ee69041d66ac6064560a39f48ae8a62bc75305"} Nov 23 14:58:56 crc kubenswrapper[4718]: I1123 14:58:56.669463 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-4g92d" event={"ID":"7cc68ab9-c26a-437b-adcd-977eb063fe25","Type":"ContainerStarted","Data":"eae82ca5e4ebbf62e012d401734246117e9ff3bfa19c094a63b7a983d02b9fca"} Nov 23 14:58:56 crc kubenswrapper[4718]: I1123 14:58:56.671770 4718 generic.go:334] "Generic (PLEG): container finished" podID="3f2a3b6b-391d-472b-9410-aa57c7a06c1e" containerID="da76a94aed6e55ea30c7e6383328efa60d21e4ac7bd3fe4961814384d3199d58" exitCode=0 Nov 23 14:58:56 crc kubenswrapper[4718]: I1123 14:58:56.671829 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-ltj6g" event={"ID":"3f2a3b6b-391d-472b-9410-aa57c7a06c1e","Type":"ContainerDied","Data":"da76a94aed6e55ea30c7e6383328efa60d21e4ac7bd3fe4961814384d3199d58"} Nov 23 14:58:56 crc kubenswrapper[4718]: I1123 14:58:56.681427 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-h8b4l" event={"ID":"b72e1603-d77f-4edc-87a2-3cc5469620fe","Type":"ContainerStarted","Data":"b915e68f78b56553726a7cf45a6d71e42754ffef85604c997e9e0c035faf2e50"} Nov 23 14:58:56 crc kubenswrapper[4718]: I1123 14:58:56.690044 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-k88hn" event={"ID":"1b5f1764-1a63-4fda-988c-49a8bc17fe79","Type":"ContainerStarted","Data":"df9e8b354cdd7a0a95b939778a0952f37ec337db2325dcf082860042270d176b"} Nov 23 14:58:56 crc kubenswrapper[4718]: I1123 14:58:56.699533 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-rw4vq" event={"ID":"06302b9c-68a3-4b48-88d7-cc0885ca0156","Type":"ContainerStarted","Data":"9bcebe6ee8e25073b0e2f085395b64903d866cf6c44a76324c0921a348b22473"} Nov 23 14:58:56 crc kubenswrapper[4718]: I1123 14:58:56.710466 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58f887965d-47csv" event={"ID":"020fa89c-9d76-439c-aee1-0843636d4469","Type":"ContainerStarted","Data":"5b5cc7df03622f45abd2cadbe80dc89573556ba782eac65a59cf4104757ef7dd"} Nov 23 14:58:56 crc kubenswrapper[4718]: I1123 14:58:56.721811 4718 generic.go:334] "Generic (PLEG): container finished" podID="f66c151c-281c-4da0-b977-d0fa8b18ba33" containerID="c8301899d88f17ac0597c6695449ed0cee88ec8745fa44de8bedde6ebc605c12" exitCode=0 Nov 23 14:58:56 crc kubenswrapper[4718]: I1123 14:58:56.722032 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmb9d" event={"ID":"f66c151c-281c-4da0-b977-d0fa8b18ba33","Type":"ContainerDied","Data":"c8301899d88f17ac0597c6695449ed0cee88ec8745fa44de8bedde6ebc605c12"} Nov 23 14:58:56 crc kubenswrapper[4718]: I1123 14:58:56.763008 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-6785w" event={"ID":"0d78d642-939c-47e3-8d60-665dff178d44","Type":"ContainerStarted","Data":"6644962bc8351f4db7c0b98cbf82540dd09b356fee5abc7f206e229f49bea062"} Nov 23 14:58:56 crc kubenswrapper[4718]: I1123 14:58:56.768875 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7969689c84-h4hzc" event={"ID":"960d1cfd-fc93-466c-8590-723c68c0bc05","Type":"ContainerStarted","Data":"05b903500f7254f901f891d571e26b08c9f8e4474e0b22a7e15c4d02eafc8b96"} Nov 23 14:58:56 crc kubenswrapper[4718]: I1123 14:58:56.770089 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-kwnhv" event={"ID":"e402a0ac-7f35-4bab-9948-b664c0ef9636","Type":"ContainerStarted","Data":"62ae8f270bf3a62e0b00a34b6aacef623de74323348228e752f9c25a2d5f7e52"} Nov 23 14:58:56 crc kubenswrapper[4718]: I1123 14:58:56.770259 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-kwnhv" Nov 23 14:58:56 crc kubenswrapper[4718]: I1123 14:58:56.794308 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-g72r5" event={"ID":"0b0e1ffa-6dff-4523-911c-ad0744bd9153","Type":"ContainerStarted","Data":"e62bbf3f54daa5d0d56644bb57f1ef008a9d179ce615fb188e80a3bd87c399cf"} Nov 23 14:58:56 crc kubenswrapper[4718]: I1123 14:58:56.827419 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-kwnhv" podStartSLOduration=7.287895742 podStartE2EDuration="20.827396447s" podCreationTimestamp="2025-11-23 14:58:36 +0000 UTC" firstStartedPulling="2025-11-23 14:58:37.498589167 +0000 UTC m=+768.738209011" lastFinishedPulling="2025-11-23 14:58:51.038089872 +0000 UTC m=+782.277709716" observedRunningTime="2025-11-23 14:58:56.787285968 +0000 UTC m=+788.026905812" watchObservedRunningTime="2025-11-23 14:58:56.827396447 +0000 UTC m=+788.067016291" Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.844808 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-5wtcx" event={"ID":"306074ad-d60e-41a2-975b-901d8874be23","Type":"ContainerStarted","Data":"9b3db891906c8ed7c6a05783d94ea2a2107bea7f658ef32c8c34e3351514f16d"} Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.845518 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-5wtcx" event={"ID":"306074ad-d60e-41a2-975b-901d8874be23","Type":"ContainerStarted","Data":"d0cec8b70b5a142098c462a32946020a18a913bac35a4277896e5962791843ba"} Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.845561 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-5wtcx" Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.854059 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-8trcx" event={"ID":"f4618913-9a14-4f47-89ec-9c4b0a931434","Type":"ContainerStarted","Data":"c0deb82a3f07bbf33bf8f50ae6f602afa7f946c4f1d5595ad4a0ddefbe23db95"} Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.854775 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-8trcx" Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.881530 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-5wtcx" podStartSLOduration=5.954221343 podStartE2EDuration="21.88150586s" podCreationTimestamp="2025-11-23 14:58:36 +0000 UTC" firstStartedPulling="2025-11-23 14:58:38.717064343 +0000 UTC m=+769.956684187" lastFinishedPulling="2025-11-23 14:58:54.64434886 +0000 UTC m=+785.883968704" observedRunningTime="2025-11-23 14:58:57.876632897 +0000 UTC m=+789.116252741" watchObservedRunningTime="2025-11-23 14:58:57.88150586 +0000 UTC m=+789.121125704" Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.884672 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-6785w" event={"ID":"0d78d642-939c-47e3-8d60-665dff178d44","Type":"ContainerStarted","Data":"ec6c624e83dd966546e97753295929d39f02e17a23441e7ab3aad80fe4707e23"} Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.884833 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-6785w" Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.896075 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7969689c84-h4hzc" event={"ID":"960d1cfd-fc93-466c-8590-723c68c0bc05","Type":"ContainerStarted","Data":"b98673e6cc51563e5cbf8aa0c609b7f5d709bbe49601a60749536b8d083ecc0d"} Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.896222 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7969689c84-h4hzc" Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.911876 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-8trcx" podStartSLOduration=6.128892785 podStartE2EDuration="21.911858214s" podCreationTimestamp="2025-11-23 14:58:36 +0000 UTC" firstStartedPulling="2025-11-23 14:58:38.185282719 +0000 UTC m=+769.424902563" lastFinishedPulling="2025-11-23 14:58:53.968248138 +0000 UTC m=+785.207867992" observedRunningTime="2025-11-23 14:58:57.895874419 +0000 UTC m=+789.135494253" watchObservedRunningTime="2025-11-23 14:58:57.911858214 +0000 UTC m=+789.151478058" Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.926786 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-6785w" podStartSLOduration=6.024841908 podStartE2EDuration="21.926761288s" podCreationTimestamp="2025-11-23 14:58:36 +0000 UTC" firstStartedPulling="2025-11-23 14:58:38.723746904 +0000 UTC m=+769.963366748" lastFinishedPulling="2025-11-23 14:58:54.625666284 +0000 UTC m=+785.865286128" observedRunningTime="2025-11-23 14:58:57.923622403 +0000 UTC m=+789.163242247" watchObservedRunningTime="2025-11-23 14:58:57.926761288 +0000 UTC m=+789.166381132" Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.929013 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-v5dn9" event={"ID":"ce59b10a-2110-44a2-9489-b1e06f6a1032","Type":"ContainerStarted","Data":"d46ae1cb1d796dccb53894b4adc2ed385667e246e766382c4d055d9abbcb75b3"} Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.929690 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-v5dn9" Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.938699 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk" event={"ID":"ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d","Type":"ContainerStarted","Data":"e7f4f85f9c5725616111a84dc0711ceeaf4538be5f9ca8b9e72d193ab6b248e0"} Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.939564 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk" Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.944346 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58f887965d-47csv" event={"ID":"020fa89c-9d76-439c-aee1-0843636d4469","Type":"ContainerStarted","Data":"76b95d1c3d911a793a969f32d52a238b95cf3ea05247b3394f8c7fa0f42cf0be"} Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.944489 4718 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58f887965d-47csv" Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.947135 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-q9nlr" event={"ID":"934a178a-2178-4c2d-bda8-9bb817f78644","Type":"ContainerStarted","Data":"0beea84855b8b50492cbac59f890f01c131714805488c8b47f78805e0317b2e3"} Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.948681 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-q9nlr" Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.960296 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-h8b4l" event={"ID":"b72e1603-d77f-4edc-87a2-3cc5469620fe","Type":"ContainerStarted","Data":"0e4a49e89233963e04087a380f5b3d76c1fc44c674b99adc697065a8f4956d10"} Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.961038 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-h8b4l" Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.967579 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7969689c84-h4hzc" podStartSLOduration=6.243649377 podStartE2EDuration="21.967551825s" podCreationTimestamp="2025-11-23 14:58:36 +0000 UTC" firstStartedPulling="2025-11-23 14:58:38.186523641 +0000 UTC m=+769.426143485" lastFinishedPulling="2025-11-23 14:58:53.910426079 +0000 UTC m=+785.150045933" observedRunningTime="2025-11-23 14:58:57.947975193 +0000 UTC m=+789.187595037" watchObservedRunningTime="2025-11-23 14:58:57.967551825 +0000 UTC m=+789.207171669" Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.968182 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-4g92d" event={"ID":"7cc68ab9-c26a-437b-adcd-977eb063fe25","Type":"ContainerStarted","Data":"df0e89c010cd1f260ac6b25639a2b9cad87b30cd1916553c056070e4cdd9c018"} Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.968330 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-4g92d" Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.973507 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-k82wv" event={"ID":"b5aad852-89aa-459e-8771-50ef010620ef","Type":"ContainerStarted","Data":"48fcfd251831ffb62bc03da18c03022f3feacbbca1190d9dd44436e3dec7b65e"} Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.973730 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-k82wv" Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.985186 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-g72r5" event={"ID":"0b0e1ffa-6dff-4523-911c-ad0744bd9153","Type":"ContainerStarted","Data":"f5238ff484667bbe1cff5d5b5b6271667584e11ea4bee2930b0b913dc826799d"} Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.986055 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-g72r5" Nov 
23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.991851 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-kpg5q" event={"ID":"968c85cb-d53b-40e8-9651-7127fc58f61a","Type":"ContainerStarted","Data":"d4e79f6d6ffa160f70cddb0032f7a2f589769c8b92a68df348deaaae7acd9be4"} Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.992105 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-kpg5q" Nov 23 14:58:57 crc kubenswrapper[4718]: I1123 14:58:57.993508 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk" podStartSLOduration=5.052716037 podStartE2EDuration="21.993493629s" podCreationTimestamp="2025-11-23 14:58:36 +0000 UTC" firstStartedPulling="2025-11-23 14:58:38.801402218 +0000 UTC m=+770.041022072" lastFinishedPulling="2025-11-23 14:58:55.74217982 +0000 UTC m=+786.981799664" observedRunningTime="2025-11-23 14:58:57.978393349 +0000 UTC m=+789.218013193" watchObservedRunningTime="2025-11-23 14:58:57.993493629 +0000 UTC m=+789.233113473" Nov 23 14:58:58 crc kubenswrapper[4718]: I1123 14:58:57.998516 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-k88hn" event={"ID":"1b5f1764-1a63-4fda-988c-49a8bc17fe79","Type":"ContainerStarted","Data":"54b16b2b47f3b087ed793238f4bfba7a32848b025d5fdda8c1fc2629b01e7a80"} Nov 23 14:58:58 crc kubenswrapper[4718]: I1123 14:58:57.999391 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-k88hn" Nov 23 14:58:58 crc kubenswrapper[4718]: I1123 14:58:58.005045 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-q9nlr" podStartSLOduration=6.110620627 podStartE2EDuration="22.005023622s" podCreationTimestamp="2025-11-23 14:58:36 +0000 UTC" firstStartedPulling="2025-11-23 14:58:38.732115221 +0000 UTC m=+769.971735065" lastFinishedPulling="2025-11-23 14:58:54.626518216 +0000 UTC m=+785.866138060" observedRunningTime="2025-11-23 14:58:57.995078202 +0000 UTC m=+789.234698046" watchObservedRunningTime="2025-11-23 14:58:58.005023622 +0000 UTC m=+789.244643466" Nov 23 14:58:58 crc kubenswrapper[4718]: I1123 14:58:58.014140 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-rw4vq" event={"ID":"06302b9c-68a3-4b48-88d7-cc0885ca0156","Type":"ContainerStarted","Data":"fb12f807099d7c3d67001e530b7f723232223ddab96ece746120cc80b6edb40e"} Nov 23 14:58:58 crc kubenswrapper[4718]: I1123 14:58:58.014917 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-rw4vq" Nov 23 14:58:58 crc kubenswrapper[4718]: I1123 14:58:58.021981 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58f887965d-47csv" podStartSLOduration=6.870511772 podStartE2EDuration="22.021958542s" podCreationTimestamp="2025-11-23 14:58:36 +0000 UTC" firstStartedPulling="2025-11-23 14:58:38.758485995 +0000 UTC m=+769.998105839" lastFinishedPulling="2025-11-23 14:58:53.909932755 +0000 UTC m=+785.149552609" observedRunningTime="2025-11-23 14:58:58.018503578 +0000 UTC 
m=+789.258123422" watchObservedRunningTime="2025-11-23 14:58:58.021958542 +0000 UTC m=+789.261578386" Nov 23 14:58:58 crc kubenswrapper[4718]: I1123 14:58:58.048722 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-v5dn9" podStartSLOduration=6.204412166 podStartE2EDuration="22.048698787s" podCreationTimestamp="2025-11-23 14:58:36 +0000 UTC" firstStartedPulling="2025-11-23 14:58:38.123652579 +0000 UTC m=+769.363272423" lastFinishedPulling="2025-11-23 14:58:53.96793919 +0000 UTC m=+785.207559044" observedRunningTime="2025-11-23 14:58:58.038901211 +0000 UTC m=+789.278521055" watchObservedRunningTime="2025-11-23 14:58:58.048698787 +0000 UTC m=+789.288318641" Nov 23 14:58:58 crc kubenswrapper[4718]: I1123 14:58:58.067190 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-4g92d" podStartSLOduration=6.103701853 podStartE2EDuration="22.067171859s" podCreationTimestamp="2025-11-23 14:58:36 +0000 UTC" firstStartedPulling="2025-11-23 14:58:38.675231731 +0000 UTC m=+769.914851575" lastFinishedPulling="2025-11-23 14:58:54.638701737 +0000 UTC m=+785.878321581" observedRunningTime="2025-11-23 14:58:58.062169483 +0000 UTC m=+789.301789327" watchObservedRunningTime="2025-11-23 14:58:58.067171859 +0000 UTC m=+789.306791703" Nov 23 14:58:58 crc kubenswrapper[4718]: I1123 14:58:58.088888 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-kpg5q" podStartSLOduration=6.17005505 podStartE2EDuration="22.088859728s" podCreationTimestamp="2025-11-23 14:58:36 +0000 UTC" firstStartedPulling="2025-11-23 14:58:38.723858637 +0000 UTC m=+769.963478471" lastFinishedPulling="2025-11-23 14:58:54.642663305 +0000 UTC m=+785.882283149" observedRunningTime="2025-11-23 14:58:58.076988566 +0000 UTC m=+789.316608420" watchObservedRunningTime="2025-11-23 14:58:58.088859728 +0000 UTC m=+789.328479572" Nov 23 14:58:58 crc kubenswrapper[4718]: I1123 14:58:58.110515 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-h8b4l" podStartSLOduration=6.869148946 podStartE2EDuration="22.110489045s" podCreationTimestamp="2025-11-23 14:58:36 +0000 UTC" firstStartedPulling="2025-11-23 14:58:38.726840728 +0000 UTC m=+769.966460572" lastFinishedPulling="2025-11-23 14:58:53.968180787 +0000 UTC m=+785.207800671" observedRunningTime="2025-11-23 14:58:58.104750389 +0000 UTC m=+789.344370243" watchObservedRunningTime="2025-11-23 14:58:58.110489045 +0000 UTC m=+789.350108889" Nov 23 14:58:58 crc kubenswrapper[4718]: I1123 14:58:58.131471 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-g72r5" podStartSLOduration=6.359905742 podStartE2EDuration="22.131427683s" podCreationTimestamp="2025-11-23 14:58:36 +0000 UTC" firstStartedPulling="2025-11-23 14:58:38.196219714 +0000 UTC m=+769.435839558" lastFinishedPulling="2025-11-23 14:58:53.967741645 +0000 UTC m=+785.207361499" observedRunningTime="2025-11-23 14:58:58.126376546 +0000 UTC m=+789.365996390" watchObservedRunningTime="2025-11-23 14:58:58.131427683 +0000 UTC m=+789.371047527" Nov 23 14:58:58 crc kubenswrapper[4718]: I1123 14:58:58.148951 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-k82wv" podStartSLOduration=6.025054797 podStartE2EDuration="22.148936369s" podCreationTimestamp="2025-11-23 14:58:36 +0000 UTC" firstStartedPulling="2025-11-23 14:58:37.785103618 +0000 UTC m=+769.024723462" lastFinishedPulling="2025-11-23 14:58:53.90898519 +0000 UTC m=+785.148605034" observedRunningTime="2025-11-23 14:58:58.148134966 +0000 UTC m=+789.387754840" watchObservedRunningTime="2025-11-23 14:58:58.148936369 +0000 UTC m=+789.388556213" Nov 23 14:58:58 crc kubenswrapper[4718]: I1123 14:58:58.172653 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-k88hn" podStartSLOduration=5.65819573 podStartE2EDuration="22.172635362s" podCreationTimestamp="2025-11-23 14:58:36 +0000 UTC" firstStartedPulling="2025-11-23 14:58:38.123262948 +0000 UTC m=+769.362882792" lastFinishedPulling="2025-11-23 14:58:54.63770258 +0000 UTC m=+785.877322424" observedRunningTime="2025-11-23 14:58:58.166741572 +0000 UTC m=+789.406361416" watchObservedRunningTime="2025-11-23 14:58:58.172635362 +0000 UTC m=+789.412255216" Nov 23 14:59:00 crc kubenswrapper[4718]: I1123 14:59:00.037288 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-g72r5" Nov 23 14:59:00 crc kubenswrapper[4718]: I1123 14:59:00.056213 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-rw4vq" podStartSLOduration=8.807697617 podStartE2EDuration="24.056190499s" podCreationTimestamp="2025-11-23 14:58:36 +0000 UTC" firstStartedPulling="2025-11-23 14:58:38.718827381 +0000 UTC m=+769.958447225" lastFinishedPulling="2025-11-23 14:58:53.967320253 +0000 UTC m=+785.206940107" observedRunningTime="2025-11-23 14:58:58.187454854 +0000 UTC m=+789.427074698" watchObservedRunningTime="2025-11-23 14:59:00.056190499 +0000 UTC m=+791.295810343" Nov 23 14:59:00 crc kubenswrapper[4718]: I1123 14:59:00.840646 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7kg4x"] Nov 23 14:59:00 crc kubenswrapper[4718]: I1123 14:59:00.842116 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7kg4x" Nov 23 14:59:00 crc kubenswrapper[4718]: I1123 14:59:00.858535 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kg4x"] Nov 23 14:59:01 crc kubenswrapper[4718]: I1123 14:59:01.015593 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2bd9d1a-b72a-43d4-9e10-579590ca19ed-utilities\") pod \"redhat-marketplace-7kg4x\" (UID: \"f2bd9d1a-b72a-43d4-9e10-579590ca19ed\") " pod="openshift-marketplace/redhat-marketplace-7kg4x" Nov 23 14:59:01 crc kubenswrapper[4718]: I1123 14:59:01.015644 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gth88\" (UniqueName: \"kubernetes.io/projected/f2bd9d1a-b72a-43d4-9e10-579590ca19ed-kube-api-access-gth88\") pod \"redhat-marketplace-7kg4x\" (UID: \"f2bd9d1a-b72a-43d4-9e10-579590ca19ed\") " pod="openshift-marketplace/redhat-marketplace-7kg4x" Nov 23 14:59:01 crc kubenswrapper[4718]: I1123 14:59:01.015722 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2bd9d1a-b72a-43d4-9e10-579590ca19ed-catalog-content\") pod \"redhat-marketplace-7kg4x\" (UID: \"f2bd9d1a-b72a-43d4-9e10-579590ca19ed\") " pod="openshift-marketplace/redhat-marketplace-7kg4x" Nov 23 14:59:01 crc kubenswrapper[4718]: I1123 14:59:01.116694 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2bd9d1a-b72a-43d4-9e10-579590ca19ed-utilities\") pod \"redhat-marketplace-7kg4x\" (UID: \"f2bd9d1a-b72a-43d4-9e10-579590ca19ed\") " pod="openshift-marketplace/redhat-marketplace-7kg4x" Nov 23 14:59:01 crc kubenswrapper[4718]: I1123 14:59:01.116760 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gth88\" (UniqueName: \"kubernetes.io/projected/f2bd9d1a-b72a-43d4-9e10-579590ca19ed-kube-api-access-gth88\") pod \"redhat-marketplace-7kg4x\" (UID: \"f2bd9d1a-b72a-43d4-9e10-579590ca19ed\") " pod="openshift-marketplace/redhat-marketplace-7kg4x" Nov 23 14:59:01 crc kubenswrapper[4718]: I1123 14:59:01.116812 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2bd9d1a-b72a-43d4-9e10-579590ca19ed-catalog-content\") pod \"redhat-marketplace-7kg4x\" (UID: \"f2bd9d1a-b72a-43d4-9e10-579590ca19ed\") " pod="openshift-marketplace/redhat-marketplace-7kg4x" Nov 23 14:59:01 crc kubenswrapper[4718]: I1123 14:59:01.117185 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2bd9d1a-b72a-43d4-9e10-579590ca19ed-utilities\") pod \"redhat-marketplace-7kg4x\" (UID: \"f2bd9d1a-b72a-43d4-9e10-579590ca19ed\") " pod="openshift-marketplace/redhat-marketplace-7kg4x" Nov 23 14:59:01 crc kubenswrapper[4718]: I1123 14:59:01.117272 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2bd9d1a-b72a-43d4-9e10-579590ca19ed-catalog-content\") pod \"redhat-marketplace-7kg4x\" (UID: \"f2bd9d1a-b72a-43d4-9e10-579590ca19ed\") " pod="openshift-marketplace/redhat-marketplace-7kg4x" Nov 23 14:59:01 crc kubenswrapper[4718]: I1123 14:59:01.140851 4718 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gth88\" (UniqueName: \"kubernetes.io/projected/f2bd9d1a-b72a-43d4-9e10-579590ca19ed-kube-api-access-gth88\") pod \"redhat-marketplace-7kg4x\" (UID: \"f2bd9d1a-b72a-43d4-9e10-579590ca19ed\") " pod="openshift-marketplace/redhat-marketplace-7kg4x" Nov 23 14:59:01 crc kubenswrapper[4718]: I1123 14:59:01.162282 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7kg4x" Nov 23 14:59:05 crc kubenswrapper[4718]: I1123 14:59:05.076523 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kg4x"] Nov 23 14:59:05 crc kubenswrapper[4718]: W1123 14:59:05.079925 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2bd9d1a_b72a_43d4_9e10_579590ca19ed.slice/crio-3216b430112df33a0084812eed0c54805eee0fec4b55874aa6c1f4c29e87694a WatchSource:0}: Error finding container 3216b430112df33a0084812eed0c54805eee0fec4b55874aa6c1f4c29e87694a: Status 404 returned error can't find the container with id 3216b430112df33a0084812eed0c54805eee0fec4b55874aa6c1f4c29e87694a Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.083035 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-7wn4t" event={"ID":"9d46e777-1d50-42a0-b20f-a24b155a0e43","Type":"ContainerStarted","Data":"2b383b2a45718e100591e20d507d8be4e13982e5a751a5846594092c62161d23"} Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.083645 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-7wn4t" Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.084921 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-b4c496f69-kt77s" event={"ID":"8e9db0b8-bd2b-45fa-8105-2524e81bcd70","Type":"ContainerStarted","Data":"c53e8d0b0c90a87e1d5d14968b5efe5ed1890b586f4d073a1d3d3df883a6cc87"} Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.085129 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-b4c496f69-kt77s" Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.086465 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d656998f4-p7dq2" event={"ID":"313f2889-e11a-440a-8358-612780f4a348","Type":"ContainerStarted","Data":"da9c418e317ef54b228c19ccf39458b4a1cb9c088de11365172a7b0124f69e13"} Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.086643 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-d656998f4-p7dq2" Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.087971 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmb9d" event={"ID":"f66c151c-281c-4da0-b977-d0fa8b18ba33","Type":"ContainerStarted","Data":"9fe7f25d12304829db2df760850315fc35705094f1037f9d81a35dd48171b0eb"} Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.089801 4718 generic.go:334] "Generic (PLEG): container finished" podID="f2bd9d1a-b72a-43d4-9e10-579590ca19ed" containerID="011b1d820f502e4344daa7b73dda771e5064a8716249d918a8eb1e1ef9bc21ce" exitCode=0 Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.089862 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-7kg4x" event={"ID":"f2bd9d1a-b72a-43d4-9e10-579590ca19ed","Type":"ContainerDied","Data":"011b1d820f502e4344daa7b73dda771e5064a8716249d918a8eb1e1ef9bc21ce"} Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.089885 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kg4x" event={"ID":"f2bd9d1a-b72a-43d4-9e10-579590ca19ed","Type":"ContainerStarted","Data":"3216b430112df33a0084812eed0c54805eee0fec4b55874aa6c1f4c29e87694a"} Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.092308 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-qtpps" event={"ID":"424e5dfa-98a5-480c-aeb9-8f279b2fdee4","Type":"ContainerStarted","Data":"4ff6294792d3a41ce231f696e5af85983e35eb3a2a831dcff66a3cb1f1ec8dbe"} Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.092610 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-qtpps" Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.095623 4718 generic.go:334] "Generic (PLEG): container finished" podID="3f2a3b6b-391d-472b-9410-aa57c7a06c1e" containerID="bb7c14a22a252c2398e6da1667984b7f4494f21e12460545323430bf80f49082" exitCode=0 Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.095691 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltj6g" event={"ID":"3f2a3b6b-391d-472b-9410-aa57c7a06c1e","Type":"ContainerDied","Data":"bb7c14a22a252c2398e6da1667984b7f4494f21e12460545323430bf80f49082"} Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.098372 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-bfnhk" event={"ID":"e3358e41-4842-4768-8235-96a8166d43b0","Type":"ContainerStarted","Data":"6ceacaf558bd5fe23c6b20ffebc116084720f2e7d417d5173b79532dfacc8f2f"} Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.098880 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-bfnhk" Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.125590 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-7wn4t" podStartSLOduration=4.525832515 podStartE2EDuration="30.125562805s" podCreationTimestamp="2025-11-23 14:58:36 +0000 UTC" firstStartedPulling="2025-11-23 14:58:38.990387737 +0000 UTC m=+770.230007581" lastFinishedPulling="2025-11-23 14:59:04.590118017 +0000 UTC m=+795.829737871" observedRunningTime="2025-11-23 14:59:06.104335279 +0000 UTC m=+797.343955133" watchObservedRunningTime="2025-11-23 14:59:06.125562805 +0000 UTC m=+797.365182679" Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.148343 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-bfnhk" podStartSLOduration=4.317879571 podStartE2EDuration="30.148325563s" podCreationTimestamp="2025-11-23 14:58:36 +0000 UTC" firstStartedPulling="2025-11-23 14:58:38.982157004 +0000 UTC m=+770.221776848" lastFinishedPulling="2025-11-23 14:59:04.812602996 +0000 UTC m=+796.052222840" observedRunningTime="2025-11-23 14:59:06.126024897 +0000 UTC m=+797.365644751" watchObservedRunningTime="2025-11-23 14:59:06.148325563 +0000 UTC m=+797.387945417" Nov 23 14:59:06 
crc kubenswrapper[4718]: I1123 14:59:06.152129 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-b4c496f69-kt77s" podStartSLOduration=4.359919101 podStartE2EDuration="30.152115576s" podCreationTimestamp="2025-11-23 14:58:36 +0000 UTC" firstStartedPulling="2025-11-23 14:58:38.99455588 +0000 UTC m=+770.234175724" lastFinishedPulling="2025-11-23 14:59:04.786752355 +0000 UTC m=+796.026372199" observedRunningTime="2025-11-23 14:59:06.146537845 +0000 UTC m=+797.386157699" watchObservedRunningTime="2025-11-23 14:59:06.152115576 +0000 UTC m=+797.391735440" Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.166158 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-qtpps" podStartSLOduration=4.49879549 podStartE2EDuration="30.166138627s" podCreationTimestamp="2025-11-23 14:58:36 +0000 UTC" firstStartedPulling="2025-11-23 14:58:39.007706716 +0000 UTC m=+770.247326560" lastFinishedPulling="2025-11-23 14:59:04.675049853 +0000 UTC m=+795.914669697" observedRunningTime="2025-11-23 14:59:06.165058007 +0000 UTC m=+797.404677921" watchObservedRunningTime="2025-11-23 14:59:06.166138627 +0000 UTC m=+797.405758471" Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.183745 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-d656998f4-p7dq2" podStartSLOduration=4.270085385 podStartE2EDuration="30.183723133s" podCreationTimestamp="2025-11-23 14:58:36 +0000 UTC" firstStartedPulling="2025-11-23 14:58:38.761498637 +0000 UTC m=+770.001118471" lastFinishedPulling="2025-11-23 14:59:04.675136365 +0000 UTC m=+795.914756219" observedRunningTime="2025-11-23 14:59:06.178090181 +0000 UTC m=+797.417710025" watchObservedRunningTime="2025-11-23 14:59:06.183723133 +0000 UTC m=+797.423342987" Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.654261 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-kwnhv" Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.671247 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-k82wv" Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.722987 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-v5dn9" Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.734813 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-k88hn" Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.770508 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7969689c84-h4hzc" Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.781193 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-8trcx" Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.833976 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-rw4vq" Nov 23 14:59:06 crc kubenswrapper[4718]: I1123 14:59:06.906575 4718 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-4g92d" Nov 23 14:59:07 crc kubenswrapper[4718]: I1123 14:59:07.043359 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58f887965d-47csv" Nov 23 14:59:07 crc kubenswrapper[4718]: I1123 14:59:07.107917 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-6785w" Nov 23 14:59:07 crc kubenswrapper[4718]: I1123 14:59:07.246586 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-q9nlr" Nov 23 14:59:07 crc kubenswrapper[4718]: I1123 14:59:07.321997 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-kpg5q" Nov 23 14:59:07 crc kubenswrapper[4718]: I1123 14:59:07.360995 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-5wtcx" Nov 23 14:59:07 crc kubenswrapper[4718]: I1123 14:59:07.402025 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-h8b4l" Nov 23 14:59:07 crc kubenswrapper[4718]: I1123 14:59:07.770242 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk" Nov 23 14:59:08 crc kubenswrapper[4718]: I1123 14:59:08.111102 4718 generic.go:334] "Generic (PLEG): container finished" podID="f66c151c-281c-4da0-b977-d0fa8b18ba33" containerID="9fe7f25d12304829db2df760850315fc35705094f1037f9d81a35dd48171b0eb" exitCode=0 Nov 23 14:59:08 crc kubenswrapper[4718]: I1123 14:59:08.111160 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmb9d" event={"ID":"f66c151c-281c-4da0-b977-d0fa8b18ba33","Type":"ContainerDied","Data":"9fe7f25d12304829db2df760850315fc35705094f1037f9d81a35dd48171b0eb"} Nov 23 14:59:12 crc kubenswrapper[4718]: I1123 14:59:12.576318 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hw56v"] Nov 23 14:59:12 crc kubenswrapper[4718]: I1123 14:59:12.586259 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hw56v" Nov 23 14:59:12 crc kubenswrapper[4718]: I1123 14:59:12.592929 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hw56v"] Nov 23 14:59:12 crc kubenswrapper[4718]: I1123 14:59:12.695928 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37b4d59b-6496-409e-94c3-b72f3261ba3e-utilities\") pod \"certified-operators-hw56v\" (UID: \"37b4d59b-6496-409e-94c3-b72f3261ba3e\") " pod="openshift-marketplace/certified-operators-hw56v" Nov 23 14:59:12 crc kubenswrapper[4718]: I1123 14:59:12.696374 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37b4d59b-6496-409e-94c3-b72f3261ba3e-catalog-content\") pod \"certified-operators-hw56v\" (UID: \"37b4d59b-6496-409e-94c3-b72f3261ba3e\") " pod="openshift-marketplace/certified-operators-hw56v" Nov 23 14:59:12 crc kubenswrapper[4718]: I1123 14:59:12.696661 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qqch\" (UniqueName: \"kubernetes.io/projected/37b4d59b-6496-409e-94c3-b72f3261ba3e-kube-api-access-4qqch\") pod \"certified-operators-hw56v\" (UID: \"37b4d59b-6496-409e-94c3-b72f3261ba3e\") " pod="openshift-marketplace/certified-operators-hw56v" Nov 23 14:59:12 crc kubenswrapper[4718]: I1123 14:59:12.798264 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qqch\" (UniqueName: \"kubernetes.io/projected/37b4d59b-6496-409e-94c3-b72f3261ba3e-kube-api-access-4qqch\") pod \"certified-operators-hw56v\" (UID: \"37b4d59b-6496-409e-94c3-b72f3261ba3e\") " pod="openshift-marketplace/certified-operators-hw56v" Nov 23 14:59:12 crc kubenswrapper[4718]: I1123 14:59:12.798363 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37b4d59b-6496-409e-94c3-b72f3261ba3e-utilities\") pod \"certified-operators-hw56v\" (UID: \"37b4d59b-6496-409e-94c3-b72f3261ba3e\") " pod="openshift-marketplace/certified-operators-hw56v" Nov 23 14:59:12 crc kubenswrapper[4718]: I1123 14:59:12.798390 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37b4d59b-6496-409e-94c3-b72f3261ba3e-catalog-content\") pod \"certified-operators-hw56v\" (UID: \"37b4d59b-6496-409e-94c3-b72f3261ba3e\") " pod="openshift-marketplace/certified-operators-hw56v" Nov 23 14:59:12 crc kubenswrapper[4718]: I1123 14:59:12.798874 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37b4d59b-6496-409e-94c3-b72f3261ba3e-catalog-content\") pod \"certified-operators-hw56v\" (UID: \"37b4d59b-6496-409e-94c3-b72f3261ba3e\") " pod="openshift-marketplace/certified-operators-hw56v" Nov 23 14:59:12 crc kubenswrapper[4718]: I1123 14:59:12.799047 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37b4d59b-6496-409e-94c3-b72f3261ba3e-utilities\") pod \"certified-operators-hw56v\" (UID: \"37b4d59b-6496-409e-94c3-b72f3261ba3e\") " pod="openshift-marketplace/certified-operators-hw56v" Nov 23 14:59:12 crc kubenswrapper[4718]: I1123 14:59:12.839394 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4qqch\" (UniqueName: \"kubernetes.io/projected/37b4d59b-6496-409e-94c3-b72f3261ba3e-kube-api-access-4qqch\") pod \"certified-operators-hw56v\" (UID: \"37b4d59b-6496-409e-94c3-b72f3261ba3e\") " pod="openshift-marketplace/certified-operators-hw56v" Nov 23 14:59:12 crc kubenswrapper[4718]: I1123 14:59:12.923282 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hw56v" Nov 23 14:59:13 crc kubenswrapper[4718]: I1123 14:59:13.386670 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hw56v"] Nov 23 14:59:13 crc kubenswrapper[4718]: W1123 14:59:13.391816 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37b4d59b_6496_409e_94c3_b72f3261ba3e.slice/crio-1cf33b50a982768b4d36ad917e69d3a35393fde1ad87531b9a29ddabfd832dc0 WatchSource:0}: Error finding container 1cf33b50a982768b4d36ad917e69d3a35393fde1ad87531b9a29ddabfd832dc0: Status 404 returned error can't find the container with id 1cf33b50a982768b4d36ad917e69d3a35393fde1ad87531b9a29ddabfd832dc0 Nov 23 14:59:14 crc kubenswrapper[4718]: I1123 14:59:14.158460 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hw56v" event={"ID":"37b4d59b-6496-409e-94c3-b72f3261ba3e","Type":"ContainerStarted","Data":"1cf33b50a982768b4d36ad917e69d3a35393fde1ad87531b9a29ddabfd832dc0"} Nov 23 14:59:17 crc kubenswrapper[4718]: I1123 14:59:17.432811 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-7wn4t" Nov 23 14:59:17 crc kubenswrapper[4718]: I1123 14:59:17.502082 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-d656998f4-p7dq2" Nov 23 14:59:17 crc kubenswrapper[4718]: I1123 14:59:17.638525 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-qtpps" Nov 23 14:59:17 crc kubenswrapper[4718]: I1123 14:59:17.687871 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-b4c496f69-kt77s" Nov 23 14:59:17 crc kubenswrapper[4718]: I1123 14:59:17.699978 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-bfnhk" Nov 23 14:59:25 crc kubenswrapper[4718]: I1123 14:59:25.254656 4718 generic.go:334] "Generic (PLEG): container finished" podID="37b4d59b-6496-409e-94c3-b72f3261ba3e" containerID="201964ab9a19615d73225c9e92f01419740be98540e945abdf39df6250f0de84" exitCode=0 Nov 23 14:59:25 crc kubenswrapper[4718]: I1123 14:59:25.254881 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hw56v" event={"ID":"37b4d59b-6496-409e-94c3-b72f3261ba3e","Type":"ContainerDied","Data":"201964ab9a19615d73225c9e92f01419740be98540e945abdf39df6250f0de84"} Nov 23 14:59:26 crc kubenswrapper[4718]: I1123 14:59:26.263988 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmb9d" event={"ID":"f66c151c-281c-4da0-b977-d0fa8b18ba33","Type":"ContainerStarted","Data":"77dfb5b05f4257457fee8903d076f977294b01fbe60cbc7ef5247065f354346a"} Nov 23 14:59:26 crc kubenswrapper[4718]: I1123 
14:59:26.265346 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-bnrmb" event={"ID":"76e89747-f3cb-45cd-beff-22193095b455","Type":"ContainerStarted","Data":"4745ad2b36bd9b063340bac8b64dc47f8b7bcb4edb77596b3d025287e36a72f4"} Nov 23 14:59:26 crc kubenswrapper[4718]: I1123 14:59:26.268040 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kg4x" event={"ID":"f2bd9d1a-b72a-43d4-9e10-579590ca19ed","Type":"ContainerStarted","Data":"b417cd5e80c3a9cc8fa407f8ba3392a9be84c75092ed01b8f0cdb3bece48e171"} Nov 23 14:59:26 crc kubenswrapper[4718]: I1123 14:59:26.270520 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltj6g" event={"ID":"3f2a3b6b-391d-472b-9410-aa57c7a06c1e","Type":"ContainerStarted","Data":"7517a16ce7990c0a5a423e73c5cfd08da394b67b345dadcb13ceffbf2b64ad35"} Nov 23 14:59:27 crc kubenswrapper[4718]: I1123 14:59:27.280098 4718 generic.go:334] "Generic (PLEG): container finished" podID="f2bd9d1a-b72a-43d4-9e10-579590ca19ed" containerID="b417cd5e80c3a9cc8fa407f8ba3392a9be84c75092ed01b8f0cdb3bece48e171" exitCode=0 Nov 23 14:59:27 crc kubenswrapper[4718]: I1123 14:59:27.280216 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kg4x" event={"ID":"f2bd9d1a-b72a-43d4-9e10-579590ca19ed","Type":"ContainerDied","Data":"b417cd5e80c3a9cc8fa407f8ba3392a9be84c75092ed01b8f0cdb3bece48e171"} Nov 23 14:59:27 crc kubenswrapper[4718]: I1123 14:59:27.282642 4718 generic.go:334] "Generic (PLEG): container finished" podID="37b4d59b-6496-409e-94c3-b72f3261ba3e" containerID="9deb769d0624609442651dfee6d46b673d7b998aecc3bd73f3bbb64429ae3330" exitCode=0 Nov 23 14:59:27 crc kubenswrapper[4718]: I1123 14:59:27.283728 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hw56v" event={"ID":"37b4d59b-6496-409e-94c3-b72f3261ba3e","Type":"ContainerDied","Data":"9deb769d0624609442651dfee6d46b673d7b998aecc3bd73f3bbb64429ae3330"} Nov 23 14:59:27 crc kubenswrapper[4718]: I1123 14:59:27.330048 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nmb9d" podStartSLOduration=11.341603777 podStartE2EDuration="40.330027405s" podCreationTimestamp="2025-11-23 14:58:47 +0000 UTC" firstStartedPulling="2025-11-23 14:58:56.779784154 +0000 UTC m=+788.019403998" lastFinishedPulling="2025-11-23 14:59:25.768207742 +0000 UTC m=+817.007827626" observedRunningTime="2025-11-23 14:59:27.324784974 +0000 UTC m=+818.564404828" watchObservedRunningTime="2025-11-23 14:59:27.330027405 +0000 UTC m=+818.569647249" Nov 23 14:59:27 crc kubenswrapper[4718]: I1123 14:59:27.347526 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ltj6g" podStartSLOduration=15.233449212 podStartE2EDuration="44.34750543s" podCreationTimestamp="2025-11-23 14:58:43 +0000 UTC" firstStartedPulling="2025-11-23 14:58:56.7840318 +0000 UTC m=+788.023651654" lastFinishedPulling="2025-11-23 14:59:25.898088028 +0000 UTC m=+817.137707872" observedRunningTime="2025-11-23 14:59:27.343782029 +0000 UTC m=+818.583401893" watchObservedRunningTime="2025-11-23 14:59:27.34750543 +0000 UTC m=+818.587125284" Nov 23 14:59:27 crc kubenswrapper[4718]: I1123 14:59:27.363715 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-bnrmb" podStartSLOduration=3.437843938 podStartE2EDuration="50.363694109s" podCreationTimestamp="2025-11-23 14:58:37 +0000 UTC" firstStartedPulling="2025-11-23 14:58:38.971777403 +0000 UTC m=+770.211397247" lastFinishedPulling="2025-11-23 14:59:25.897627524 +0000 UTC m=+817.137247418" observedRunningTime="2025-11-23 14:59:27.361231553 +0000 UTC m=+818.600851407" watchObservedRunningTime="2025-11-23 14:59:27.363694109 +0000 UTC m=+818.603313963" Nov 23 14:59:27 crc kubenswrapper[4718]: I1123 14:59:27.946953 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nmb9d" Nov 23 14:59:27 crc kubenswrapper[4718]: I1123 14:59:27.947011 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nmb9d" Nov 23 14:59:28 crc kubenswrapper[4718]: I1123 14:59:28.290210 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hw56v" event={"ID":"37b4d59b-6496-409e-94c3-b72f3261ba3e","Type":"ContainerStarted","Data":"b1a0243a5057e2453a7915d42cd0155e35863efd41a32bcc053e0c0899e885cf"} Nov 23 14:59:28 crc kubenswrapper[4718]: I1123 14:59:28.292154 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kg4x" event={"ID":"f2bd9d1a-b72a-43d4-9e10-579590ca19ed","Type":"ContainerStarted","Data":"1d4d69709938d515f386d12b6672b1fa67bb687fdf01f94df1bb0419aeb71a17"} Nov 23 14:59:28 crc kubenswrapper[4718]: I1123 14:59:28.313070 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hw56v" podStartSLOduration=13.951874467 podStartE2EDuration="16.313053618s" podCreationTimestamp="2025-11-23 14:59:12 +0000 UTC" firstStartedPulling="2025-11-23 14:59:25.690656427 +0000 UTC m=+816.930276311" lastFinishedPulling="2025-11-23 14:59:28.051835608 +0000 UTC m=+819.291455462" observedRunningTime="2025-11-23 14:59:28.311383633 +0000 UTC m=+819.551003477" watchObservedRunningTime="2025-11-23 14:59:28.313053618 +0000 UTC m=+819.552673462" Nov 23 14:59:28 crc kubenswrapper[4718]: I1123 14:59:28.343534 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7kg4x" podStartSLOduration=6.473486869 podStartE2EDuration="28.343517316s" podCreationTimestamp="2025-11-23 14:59:00 +0000 UTC" firstStartedPulling="2025-11-23 14:59:06.092328953 +0000 UTC m=+797.331948797" lastFinishedPulling="2025-11-23 14:59:27.9623594 +0000 UTC m=+819.201979244" observedRunningTime="2025-11-23 14:59:28.340937715 +0000 UTC m=+819.580557569" watchObservedRunningTime="2025-11-23 14:59:28.343517316 +0000 UTC m=+819.583137160" Nov 23 14:59:28 crc kubenswrapper[4718]: I1123 14:59:28.989784 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nmb9d" podUID="f66c151c-281c-4da0-b977-d0fa8b18ba33" containerName="registry-server" probeResult="failure" output=< Nov 23 14:59:28 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Nov 23 14:59:28 crc kubenswrapper[4718]: > Nov 23 14:59:31 crc kubenswrapper[4718]: I1123 14:59:31.163177 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7kg4x" Nov 23 14:59:31 crc kubenswrapper[4718]: I1123 14:59:31.163249 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-7kg4x" Nov 23 14:59:31 crc kubenswrapper[4718]: I1123 14:59:31.214293 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7kg4x" Nov 23 14:59:32 crc kubenswrapper[4718]: I1123 14:59:32.923910 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hw56v" Nov 23 14:59:32 crc kubenswrapper[4718]: I1123 14:59:32.924340 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hw56v" Nov 23 14:59:32 crc kubenswrapper[4718]: I1123 14:59:32.970177 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hw56v" Nov 23 14:59:33 crc kubenswrapper[4718]: I1123 14:59:33.418051 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hw56v" Nov 23 14:59:33 crc kubenswrapper[4718]: I1123 14:59:33.606573 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ltj6g" Nov 23 14:59:33 crc kubenswrapper[4718]: I1123 14:59:33.606655 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ltj6g" Nov 23 14:59:33 crc kubenswrapper[4718]: I1123 14:59:33.665978 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ltj6g" Nov 23 14:59:34 crc kubenswrapper[4718]: I1123 14:59:34.433960 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ltj6g" Nov 23 14:59:35 crc kubenswrapper[4718]: I1123 14:59:35.320406 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hw56v"] Nov 23 14:59:35 crc kubenswrapper[4718]: I1123 14:59:35.365351 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hw56v" podUID="37b4d59b-6496-409e-94c3-b72f3261ba3e" containerName="registry-server" containerID="cri-o://b1a0243a5057e2453a7915d42cd0155e35863efd41a32bcc053e0c0899e885cf" gracePeriod=2 Nov 23 14:59:35 crc kubenswrapper[4718]: I1123 14:59:35.927227 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ltj6g"] Nov 23 14:59:36 crc kubenswrapper[4718]: I1123 14:59:36.374301 4718 generic.go:334] "Generic (PLEG): container finished" podID="37b4d59b-6496-409e-94c3-b72f3261ba3e" containerID="b1a0243a5057e2453a7915d42cd0155e35863efd41a32bcc053e0c0899e885cf" exitCode=0 Nov 23 14:59:36 crc kubenswrapper[4718]: I1123 14:59:36.374338 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hw56v" event={"ID":"37b4d59b-6496-409e-94c3-b72f3261ba3e","Type":"ContainerDied","Data":"b1a0243a5057e2453a7915d42cd0155e35863efd41a32bcc053e0c0899e885cf"} Nov 23 14:59:36 crc kubenswrapper[4718]: I1123 14:59:36.374510 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ltj6g" podUID="3f2a3b6b-391d-472b-9410-aa57c7a06c1e" containerName="registry-server" containerID="cri-o://7517a16ce7990c0a5a423e73c5cfd08da394b67b345dadcb13ceffbf2b64ad35" gracePeriod=2 Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.055420 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ltj6g" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.061037 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hw56v" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.183218 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37b4d59b-6496-409e-94c3-b72f3261ba3e-utilities\") pod \"37b4d59b-6496-409e-94c3-b72f3261ba3e\" (UID: \"37b4d59b-6496-409e-94c3-b72f3261ba3e\") " Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.183618 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f2a3b6b-391d-472b-9410-aa57c7a06c1e-utilities\") pod \"3f2a3b6b-391d-472b-9410-aa57c7a06c1e\" (UID: \"3f2a3b6b-391d-472b-9410-aa57c7a06c1e\") " Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.183702 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qqch\" (UniqueName: \"kubernetes.io/projected/37b4d59b-6496-409e-94c3-b72f3261ba3e-kube-api-access-4qqch\") pod \"37b4d59b-6496-409e-94c3-b72f3261ba3e\" (UID: \"37b4d59b-6496-409e-94c3-b72f3261ba3e\") " Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.184536 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f2a3b6b-391d-472b-9410-aa57c7a06c1e-utilities" (OuterVolumeSpecName: "utilities") pod "3f2a3b6b-391d-472b-9410-aa57c7a06c1e" (UID: "3f2a3b6b-391d-472b-9410-aa57c7a06c1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.184830 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxchc\" (UniqueName: \"kubernetes.io/projected/3f2a3b6b-391d-472b-9410-aa57c7a06c1e-kube-api-access-dxchc\") pod \"3f2a3b6b-391d-472b-9410-aa57c7a06c1e\" (UID: \"3f2a3b6b-391d-472b-9410-aa57c7a06c1e\") " Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.184908 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f2a3b6b-391d-472b-9410-aa57c7a06c1e-catalog-content\") pod \"3f2a3b6b-391d-472b-9410-aa57c7a06c1e\" (UID: \"3f2a3b6b-391d-472b-9410-aa57c7a06c1e\") " Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.184834 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37b4d59b-6496-409e-94c3-b72f3261ba3e-utilities" (OuterVolumeSpecName: "utilities") pod "37b4d59b-6496-409e-94c3-b72f3261ba3e" (UID: "37b4d59b-6496-409e-94c3-b72f3261ba3e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.184977 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37b4d59b-6496-409e-94c3-b72f3261ba3e-catalog-content\") pod \"37b4d59b-6496-409e-94c3-b72f3261ba3e\" (UID: \"37b4d59b-6496-409e-94c3-b72f3261ba3e\") " Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.185413 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37b4d59b-6496-409e-94c3-b72f3261ba3e-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.185431 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f2a3b6b-391d-472b-9410-aa57c7a06c1e-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.189187 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37b4d59b-6496-409e-94c3-b72f3261ba3e-kube-api-access-4qqch" (OuterVolumeSpecName: "kube-api-access-4qqch") pod "37b4d59b-6496-409e-94c3-b72f3261ba3e" (UID: "37b4d59b-6496-409e-94c3-b72f3261ba3e"). InnerVolumeSpecName "kube-api-access-4qqch". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.189617 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f2a3b6b-391d-472b-9410-aa57c7a06c1e-kube-api-access-dxchc" (OuterVolumeSpecName: "kube-api-access-dxchc") pod "3f2a3b6b-391d-472b-9410-aa57c7a06c1e" (UID: "3f2a3b6b-391d-472b-9410-aa57c7a06c1e"). InnerVolumeSpecName "kube-api-access-dxchc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.254193 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f2a3b6b-391d-472b-9410-aa57c7a06c1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f2a3b6b-391d-472b-9410-aa57c7a06c1e" (UID: "3f2a3b6b-391d-472b-9410-aa57c7a06c1e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.260566 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37b4d59b-6496-409e-94c3-b72f3261ba3e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37b4d59b-6496-409e-94c3-b72f3261ba3e" (UID: "37b4d59b-6496-409e-94c3-b72f3261ba3e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.286672 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qqch\" (UniqueName: \"kubernetes.io/projected/37b4d59b-6496-409e-94c3-b72f3261ba3e-kube-api-access-4qqch\") on node \"crc\" DevicePath \"\"" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.286704 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxchc\" (UniqueName: \"kubernetes.io/projected/3f2a3b6b-391d-472b-9410-aa57c7a06c1e-kube-api-access-dxchc\") on node \"crc\" DevicePath \"\"" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.286714 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f2a3b6b-391d-472b-9410-aa57c7a06c1e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.286722 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37b4d59b-6496-409e-94c3-b72f3261ba3e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.385853 4718 generic.go:334] "Generic (PLEG): container finished" podID="3f2a3b6b-391d-472b-9410-aa57c7a06c1e" containerID="7517a16ce7990c0a5a423e73c5cfd08da394b67b345dadcb13ceffbf2b64ad35" exitCode=0 Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.385921 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltj6g" event={"ID":"3f2a3b6b-391d-472b-9410-aa57c7a06c1e","Type":"ContainerDied","Data":"7517a16ce7990c0a5a423e73c5cfd08da394b67b345dadcb13ceffbf2b64ad35"} Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.385952 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltj6g" event={"ID":"3f2a3b6b-391d-472b-9410-aa57c7a06c1e","Type":"ContainerDied","Data":"e0e1aa02ec127ccf40af8f78397d2d2a92e531a05b003f8da72082fcb0e1632f"} Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.385972 4718 scope.go:117] "RemoveContainer" containerID="7517a16ce7990c0a5a423e73c5cfd08da394b67b345dadcb13ceffbf2b64ad35" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.386106 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ltj6g" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.396519 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hw56v" event={"ID":"37b4d59b-6496-409e-94c3-b72f3261ba3e","Type":"ContainerDied","Data":"1cf33b50a982768b4d36ad917e69d3a35393fde1ad87531b9a29ddabfd832dc0"} Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.396592 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hw56v" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.422676 4718 scope.go:117] "RemoveContainer" containerID="bb7c14a22a252c2398e6da1667984b7f4494f21e12460545323430bf80f49082" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.431151 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ltj6g"] Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.436653 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ltj6g"] Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.532798 4718 scope.go:117] "RemoveContainer" containerID="da76a94aed6e55ea30c7e6383328efa60d21e4ac7bd3fe4961814384d3199d58" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.558825 4718 scope.go:117] "RemoveContainer" containerID="7517a16ce7990c0a5a423e73c5cfd08da394b67b345dadcb13ceffbf2b64ad35" Nov 23 14:59:37 crc kubenswrapper[4718]: E1123 14:59:37.559127 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7517a16ce7990c0a5a423e73c5cfd08da394b67b345dadcb13ceffbf2b64ad35\": container with ID starting with 7517a16ce7990c0a5a423e73c5cfd08da394b67b345dadcb13ceffbf2b64ad35 not found: ID does not exist" containerID="7517a16ce7990c0a5a423e73c5cfd08da394b67b345dadcb13ceffbf2b64ad35" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.559156 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7517a16ce7990c0a5a423e73c5cfd08da394b67b345dadcb13ceffbf2b64ad35"} err="failed to get container status \"7517a16ce7990c0a5a423e73c5cfd08da394b67b345dadcb13ceffbf2b64ad35\": rpc error: code = NotFound desc = could not find container \"7517a16ce7990c0a5a423e73c5cfd08da394b67b345dadcb13ceffbf2b64ad35\": container with ID starting with 7517a16ce7990c0a5a423e73c5cfd08da394b67b345dadcb13ceffbf2b64ad35 not found: ID does not exist" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.559178 4718 scope.go:117] "RemoveContainer" containerID="bb7c14a22a252c2398e6da1667984b7f4494f21e12460545323430bf80f49082" Nov 23 14:59:37 crc kubenswrapper[4718]: E1123 14:59:37.559412 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb7c14a22a252c2398e6da1667984b7f4494f21e12460545323430bf80f49082\": container with ID starting with bb7c14a22a252c2398e6da1667984b7f4494f21e12460545323430bf80f49082 not found: ID does not exist" containerID="bb7c14a22a252c2398e6da1667984b7f4494f21e12460545323430bf80f49082" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.559431 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb7c14a22a252c2398e6da1667984b7f4494f21e12460545323430bf80f49082"} err="failed to get container status \"bb7c14a22a252c2398e6da1667984b7f4494f21e12460545323430bf80f49082\": rpc error: code = NotFound desc = could not find container \"bb7c14a22a252c2398e6da1667984b7f4494f21e12460545323430bf80f49082\": container with ID starting with bb7c14a22a252c2398e6da1667984b7f4494f21e12460545323430bf80f49082 not found: ID does not exist" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.559459 4718 scope.go:117] "RemoveContainer" containerID="da76a94aed6e55ea30c7e6383328efa60d21e4ac7bd3fe4961814384d3199d58" Nov 23 14:59:37 crc kubenswrapper[4718]: E1123 14:59:37.560173 4718 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"da76a94aed6e55ea30c7e6383328efa60d21e4ac7bd3fe4961814384d3199d58\": container with ID starting with da76a94aed6e55ea30c7e6383328efa60d21e4ac7bd3fe4961814384d3199d58 not found: ID does not exist" containerID="da76a94aed6e55ea30c7e6383328efa60d21e4ac7bd3fe4961814384d3199d58" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.560194 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da76a94aed6e55ea30c7e6383328efa60d21e4ac7bd3fe4961814384d3199d58"} err="failed to get container status \"da76a94aed6e55ea30c7e6383328efa60d21e4ac7bd3fe4961814384d3199d58\": rpc error: code = NotFound desc = could not find container \"da76a94aed6e55ea30c7e6383328efa60d21e4ac7bd3fe4961814384d3199d58\": container with ID starting with da76a94aed6e55ea30c7e6383328efa60d21e4ac7bd3fe4961814384d3199d58 not found: ID does not exist" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.560208 4718 scope.go:117] "RemoveContainer" containerID="b1a0243a5057e2453a7915d42cd0155e35863efd41a32bcc053e0c0899e885cf" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.585631 4718 scope.go:117] "RemoveContainer" containerID="9deb769d0624609442651dfee6d46b673d7b998aecc3bd73f3bbb64429ae3330" Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.605103 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hw56v"] Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.605175 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hw56v"] Nov 23 14:59:37 crc kubenswrapper[4718]: I1123 14:59:37.605845 4718 scope.go:117] "RemoveContainer" containerID="201964ab9a19615d73225c9e92f01419740be98540e945abdf39df6250f0de84" Nov 23 14:59:38 crc kubenswrapper[4718]: I1123 14:59:38.010896 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nmb9d" Nov 23 14:59:38 crc kubenswrapper[4718]: I1123 14:59:38.061737 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nmb9d" Nov 23 14:59:38 crc kubenswrapper[4718]: I1123 14:59:38.452291 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37b4d59b-6496-409e-94c3-b72f3261ba3e" path="/var/lib/kubelet/pods/37b4d59b-6496-409e-94c3-b72f3261ba3e/volumes" Nov 23 14:59:38 crc kubenswrapper[4718]: I1123 14:59:38.453383 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f2a3b6b-391d-472b-9410-aa57c7a06c1e" path="/var/lib/kubelet/pods/3f2a3b6b-391d-472b-9410-aa57c7a06c1e/volumes" Nov 23 14:59:41 crc kubenswrapper[4718]: I1123 14:59:41.200772 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7kg4x" Nov 23 14:59:41 crc kubenswrapper[4718]: I1123 14:59:41.721258 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nmb9d"] Nov 23 14:59:41 crc kubenswrapper[4718]: I1123 14:59:41.721608 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nmb9d" podUID="f66c151c-281c-4da0-b977-d0fa8b18ba33" containerName="registry-server" containerID="cri-o://77dfb5b05f4257457fee8903d076f977294b01fbe60cbc7ef5247065f354346a" gracePeriod=2 Nov 23 14:59:41 crc kubenswrapper[4718]: E1123 14:59:41.939312 4718 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf66c151c_281c_4da0_b977_d0fa8b18ba33.slice/crio-conmon-77dfb5b05f4257457fee8903d076f977294b01fbe60cbc7ef5247065f354346a.scope\": RecentStats: unable to find data in memory cache]" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.168499 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nmb9d" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.356757 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p597\" (UniqueName: \"kubernetes.io/projected/f66c151c-281c-4da0-b977-d0fa8b18ba33-kube-api-access-2p597\") pod \"f66c151c-281c-4da0-b977-d0fa8b18ba33\" (UID: \"f66c151c-281c-4da0-b977-d0fa8b18ba33\") " Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.357117 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66c151c-281c-4da0-b977-d0fa8b18ba33-utilities\") pod \"f66c151c-281c-4da0-b977-d0fa8b18ba33\" (UID: \"f66c151c-281c-4da0-b977-d0fa8b18ba33\") " Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.357141 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66c151c-281c-4da0-b977-d0fa8b18ba33-catalog-content\") pod \"f66c151c-281c-4da0-b977-d0fa8b18ba33\" (UID: \"f66c151c-281c-4da0-b977-d0fa8b18ba33\") " Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.357948 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f66c151c-281c-4da0-b977-d0fa8b18ba33-utilities" (OuterVolumeSpecName: "utilities") pod "f66c151c-281c-4da0-b977-d0fa8b18ba33" (UID: "f66c151c-281c-4da0-b977-d0fa8b18ba33"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.358255 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66c151c-281c-4da0-b977-d0fa8b18ba33-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.361972 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f66c151c-281c-4da0-b977-d0fa8b18ba33-kube-api-access-2p597" (OuterVolumeSpecName: "kube-api-access-2p597") pod "f66c151c-281c-4da0-b977-d0fa8b18ba33" (UID: "f66c151c-281c-4da0-b977-d0fa8b18ba33"). InnerVolumeSpecName "kube-api-access-2p597". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.444225 4718 generic.go:334] "Generic (PLEG): container finished" podID="f66c151c-281c-4da0-b977-d0fa8b18ba33" containerID="77dfb5b05f4257457fee8903d076f977294b01fbe60cbc7ef5247065f354346a" exitCode=0 Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.444357 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nmb9d" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.453457 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f66c151c-281c-4da0-b977-d0fa8b18ba33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f66c151c-281c-4da0-b977-d0fa8b18ba33" (UID: "f66c151c-281c-4da0-b977-d0fa8b18ba33"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.459613 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66c151c-281c-4da0-b977-d0fa8b18ba33-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.459642 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p597\" (UniqueName: \"kubernetes.io/projected/f66c151c-281c-4da0-b977-d0fa8b18ba33-kube-api-access-2p597\") on node \"crc\" DevicePath \"\"" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.462757 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmb9d" event={"ID":"f66c151c-281c-4da0-b977-d0fa8b18ba33","Type":"ContainerDied","Data":"77dfb5b05f4257457fee8903d076f977294b01fbe60cbc7ef5247065f354346a"} Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.463018 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmb9d" event={"ID":"f66c151c-281c-4da0-b977-d0fa8b18ba33","Type":"ContainerDied","Data":"abef1639e7f15b4a8a9aa47ad6327a57a3ed8a3ab0edfc948222ce5bfbe9a6e5"} Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.463230 4718 scope.go:117] "RemoveContainer" containerID="77dfb5b05f4257457fee8903d076f977294b01fbe60cbc7ef5247065f354346a" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.495739 4718 scope.go:117] "RemoveContainer" containerID="9fe7f25d12304829db2df760850315fc35705094f1037f9d81a35dd48171b0eb" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.514296 4718 scope.go:117] "RemoveContainer" containerID="c8301899d88f17ac0597c6695449ed0cee88ec8745fa44de8bedde6ebc605c12" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.539370 4718 scope.go:117] "RemoveContainer" containerID="77dfb5b05f4257457fee8903d076f977294b01fbe60cbc7ef5247065f354346a" Nov 23 14:59:42 crc kubenswrapper[4718]: E1123 14:59:42.539770 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77dfb5b05f4257457fee8903d076f977294b01fbe60cbc7ef5247065f354346a\": container with ID starting with 77dfb5b05f4257457fee8903d076f977294b01fbe60cbc7ef5247065f354346a not found: ID does not exist" containerID="77dfb5b05f4257457fee8903d076f977294b01fbe60cbc7ef5247065f354346a" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.539808 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77dfb5b05f4257457fee8903d076f977294b01fbe60cbc7ef5247065f354346a"} err="failed to get container status \"77dfb5b05f4257457fee8903d076f977294b01fbe60cbc7ef5247065f354346a\": rpc error: code = NotFound desc = could not find container \"77dfb5b05f4257457fee8903d076f977294b01fbe60cbc7ef5247065f354346a\": container with ID starting with 77dfb5b05f4257457fee8903d076f977294b01fbe60cbc7ef5247065f354346a not found: ID does not exist" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.539831 4718 scope.go:117] "RemoveContainer" containerID="9fe7f25d12304829db2df760850315fc35705094f1037f9d81a35dd48171b0eb" Nov 23 14:59:42 crc kubenswrapper[4718]: E1123 14:59:42.540121 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fe7f25d12304829db2df760850315fc35705094f1037f9d81a35dd48171b0eb\": container with ID starting with 9fe7f25d12304829db2df760850315fc35705094f1037f9d81a35dd48171b0eb 
not found: ID does not exist" containerID="9fe7f25d12304829db2df760850315fc35705094f1037f9d81a35dd48171b0eb" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.540150 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fe7f25d12304829db2df760850315fc35705094f1037f9d81a35dd48171b0eb"} err="failed to get container status \"9fe7f25d12304829db2df760850315fc35705094f1037f9d81a35dd48171b0eb\": rpc error: code = NotFound desc = could not find container \"9fe7f25d12304829db2df760850315fc35705094f1037f9d81a35dd48171b0eb\": container with ID starting with 9fe7f25d12304829db2df760850315fc35705094f1037f9d81a35dd48171b0eb not found: ID does not exist" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.540175 4718 scope.go:117] "RemoveContainer" containerID="c8301899d88f17ac0597c6695449ed0cee88ec8745fa44de8bedde6ebc605c12" Nov 23 14:59:42 crc kubenswrapper[4718]: E1123 14:59:42.540789 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8301899d88f17ac0597c6695449ed0cee88ec8745fa44de8bedde6ebc605c12\": container with ID starting with c8301899d88f17ac0597c6695449ed0cee88ec8745fa44de8bedde6ebc605c12 not found: ID does not exist" containerID="c8301899d88f17ac0597c6695449ed0cee88ec8745fa44de8bedde6ebc605c12" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.540837 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8301899d88f17ac0597c6695449ed0cee88ec8745fa44de8bedde6ebc605c12"} err="failed to get container status \"c8301899d88f17ac0597c6695449ed0cee88ec8745fa44de8bedde6ebc605c12\": rpc error: code = NotFound desc = could not find container \"c8301899d88f17ac0597c6695449ed0cee88ec8745fa44de8bedde6ebc605c12\": container with ID starting with c8301899d88f17ac0597c6695449ed0cee88ec8745fa44de8bedde6ebc605c12 not found: ID does not exist" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.760937 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nmb9d"] Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.766100 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nmb9d"] Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.960293 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lfvt8"] Nov 23 14:59:42 crc kubenswrapper[4718]: E1123 14:59:42.960617 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66c151c-281c-4da0-b977-d0fa8b18ba33" containerName="extract-utilities" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.960636 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66c151c-281c-4da0-b977-d0fa8b18ba33" containerName="extract-utilities" Nov 23 14:59:42 crc kubenswrapper[4718]: E1123 14:59:42.960659 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f2a3b6b-391d-472b-9410-aa57c7a06c1e" containerName="extract-utilities" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.960666 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f2a3b6b-391d-472b-9410-aa57c7a06c1e" containerName="extract-utilities" Nov 23 14:59:42 crc kubenswrapper[4718]: E1123 14:59:42.960678 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f2a3b6b-391d-472b-9410-aa57c7a06c1e" containerName="extract-content" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.960685 4718 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="3f2a3b6b-391d-472b-9410-aa57c7a06c1e" containerName="extract-content" Nov 23 14:59:42 crc kubenswrapper[4718]: E1123 14:59:42.960700 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f2a3b6b-391d-472b-9410-aa57c7a06c1e" containerName="registry-server" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.960706 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f2a3b6b-391d-472b-9410-aa57c7a06c1e" containerName="registry-server" Nov 23 14:59:42 crc kubenswrapper[4718]: E1123 14:59:42.960722 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b4d59b-6496-409e-94c3-b72f3261ba3e" containerName="extract-utilities" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.960728 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b4d59b-6496-409e-94c3-b72f3261ba3e" containerName="extract-utilities" Nov 23 14:59:42 crc kubenswrapper[4718]: E1123 14:59:42.960741 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b4d59b-6496-409e-94c3-b72f3261ba3e" containerName="registry-server" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.960747 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b4d59b-6496-409e-94c3-b72f3261ba3e" containerName="registry-server" Nov 23 14:59:42 crc kubenswrapper[4718]: E1123 14:59:42.960758 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b4d59b-6496-409e-94c3-b72f3261ba3e" containerName="extract-content" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.960764 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b4d59b-6496-409e-94c3-b72f3261ba3e" containerName="extract-content" Nov 23 14:59:42 crc kubenswrapper[4718]: E1123 14:59:42.960778 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66c151c-281c-4da0-b977-d0fa8b18ba33" containerName="extract-content" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.960784 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66c151c-281c-4da0-b977-d0fa8b18ba33" containerName="extract-content" Nov 23 14:59:42 crc kubenswrapper[4718]: E1123 14:59:42.960794 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66c151c-281c-4da0-b977-d0fa8b18ba33" containerName="registry-server" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.960800 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66c151c-281c-4da0-b977-d0fa8b18ba33" containerName="registry-server" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.960938 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f66c151c-281c-4da0-b977-d0fa8b18ba33" containerName="registry-server" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.960953 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f2a3b6b-391d-472b-9410-aa57c7a06c1e" containerName="registry-server" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.960962 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="37b4d59b-6496-409e-94c3-b72f3261ba3e" containerName="registry-server" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.961624 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lfvt8" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.964152 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.964356 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-txv5w" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.964874 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.971182 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lfvt8"] Nov 23 14:59:42 crc kubenswrapper[4718]: I1123 14:59:42.975171 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 23 14:59:43 crc kubenswrapper[4718]: I1123 14:59:43.018163 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6l9m5"] Nov 23 14:59:43 crc kubenswrapper[4718]: I1123 14:59:43.019527 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6l9m5" Nov 23 14:59:43 crc kubenswrapper[4718]: I1123 14:59:43.023135 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 23 14:59:43 crc kubenswrapper[4718]: I1123 14:59:43.027664 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6l9m5"] Nov 23 14:59:43 crc kubenswrapper[4718]: I1123 14:59:43.069694 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3-config\") pod \"dnsmasq-dns-675f4bcbfc-lfvt8\" (UID: \"c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lfvt8" Nov 23 14:59:43 crc kubenswrapper[4718]: I1123 14:59:43.069915 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlc72\" (UniqueName: \"kubernetes.io/projected/c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3-kube-api-access-nlc72\") pod \"dnsmasq-dns-675f4bcbfc-lfvt8\" (UID: \"c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lfvt8" Nov 23 14:59:43 crc kubenswrapper[4718]: I1123 14:59:43.171769 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6998dfc-d923-4ce3-8957-b1aab8aceba2-config\") pod \"dnsmasq-dns-78dd6ddcc-6l9m5\" (UID: \"d6998dfc-d923-4ce3-8957-b1aab8aceba2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6l9m5" Nov 23 14:59:43 crc kubenswrapper[4718]: I1123 14:59:43.172061 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3-config\") pod \"dnsmasq-dns-675f4bcbfc-lfvt8\" (UID: \"c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lfvt8" Nov 23 14:59:43 crc kubenswrapper[4718]: I1123 14:59:43.172224 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlc72\" (UniqueName: \"kubernetes.io/projected/c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3-kube-api-access-nlc72\") pod \"dnsmasq-dns-675f4bcbfc-lfvt8\" (UID: \"c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lfvt8" Nov 23 14:59:43 crc 
kubenswrapper[4718]: I1123 14:59:43.172382 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmq2r\" (UniqueName: \"kubernetes.io/projected/d6998dfc-d923-4ce3-8957-b1aab8aceba2-kube-api-access-dmq2r\") pod \"dnsmasq-dns-78dd6ddcc-6l9m5\" (UID: \"d6998dfc-d923-4ce3-8957-b1aab8aceba2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6l9m5" Nov 23 14:59:43 crc kubenswrapper[4718]: I1123 14:59:43.172513 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6998dfc-d923-4ce3-8957-b1aab8aceba2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6l9m5\" (UID: \"d6998dfc-d923-4ce3-8957-b1aab8aceba2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6l9m5" Nov 23 14:59:43 crc kubenswrapper[4718]: I1123 14:59:43.173129 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3-config\") pod \"dnsmasq-dns-675f4bcbfc-lfvt8\" (UID: \"c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lfvt8" Nov 23 14:59:43 crc kubenswrapper[4718]: I1123 14:59:43.188728 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlc72\" (UniqueName: \"kubernetes.io/projected/c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3-kube-api-access-nlc72\") pod \"dnsmasq-dns-675f4bcbfc-lfvt8\" (UID: \"c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lfvt8" Nov 23 14:59:43 crc kubenswrapper[4718]: I1123 14:59:43.273384 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmq2r\" (UniqueName: \"kubernetes.io/projected/d6998dfc-d923-4ce3-8957-b1aab8aceba2-kube-api-access-dmq2r\") pod \"dnsmasq-dns-78dd6ddcc-6l9m5\" (UID: \"d6998dfc-d923-4ce3-8957-b1aab8aceba2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6l9m5" Nov 23 14:59:43 crc kubenswrapper[4718]: I1123 14:59:43.273457 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6998dfc-d923-4ce3-8957-b1aab8aceba2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6l9m5\" (UID: \"d6998dfc-d923-4ce3-8957-b1aab8aceba2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6l9m5" Nov 23 14:59:43 crc kubenswrapper[4718]: I1123 14:59:43.273530 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6998dfc-d923-4ce3-8957-b1aab8aceba2-config\") pod \"dnsmasq-dns-78dd6ddcc-6l9m5\" (UID: \"d6998dfc-d923-4ce3-8957-b1aab8aceba2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6l9m5" Nov 23 14:59:43 crc kubenswrapper[4718]: I1123 14:59:43.274543 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6998dfc-d923-4ce3-8957-b1aab8aceba2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6l9m5\" (UID: \"d6998dfc-d923-4ce3-8957-b1aab8aceba2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6l9m5" Nov 23 14:59:43 crc kubenswrapper[4718]: I1123 14:59:43.274578 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lfvt8" Nov 23 14:59:43 crc kubenswrapper[4718]: I1123 14:59:43.274599 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6998dfc-d923-4ce3-8957-b1aab8aceba2-config\") pod \"dnsmasq-dns-78dd6ddcc-6l9m5\" (UID: \"d6998dfc-d923-4ce3-8957-b1aab8aceba2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6l9m5" Nov 23 14:59:43 crc kubenswrapper[4718]: I1123 14:59:43.294315 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmq2r\" (UniqueName: \"kubernetes.io/projected/d6998dfc-d923-4ce3-8957-b1aab8aceba2-kube-api-access-dmq2r\") pod \"dnsmasq-dns-78dd6ddcc-6l9m5\" (UID: \"d6998dfc-d923-4ce3-8957-b1aab8aceba2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6l9m5" Nov 23 14:59:43 crc kubenswrapper[4718]: I1123 14:59:43.334767 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6l9m5" Nov 23 14:59:43 crc kubenswrapper[4718]: I1123 14:59:43.696194 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lfvt8"] Nov 23 14:59:43 crc kubenswrapper[4718]: W1123 14:59:43.702576 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3f9c74f_aa79_43c3_a1d3_883ddf1d0bb3.slice/crio-ee1f894986a6d2099342b13eb79cee467e6f9c56aa7b068628ede8f45f86fb8d WatchSource:0}: Error finding container ee1f894986a6d2099342b13eb79cee467e6f9c56aa7b068628ede8f45f86fb8d: Status 404 returned error can't find the container with id ee1f894986a6d2099342b13eb79cee467e6f9c56aa7b068628ede8f45f86fb8d Nov 23 14:59:43 crc kubenswrapper[4718]: I1123 14:59:43.704140 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 14:59:43 crc kubenswrapper[4718]: I1123 14:59:43.811972 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6l9m5"] Nov 23 14:59:43 crc kubenswrapper[4718]: W1123 14:59:43.818380 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6998dfc_d923_4ce3_8957_b1aab8aceba2.slice/crio-7cecca931aee54001ae0f564f0e118e4291f8886a6b9eec644fb261c846bc3e8 WatchSource:0}: Error finding container 7cecca931aee54001ae0f564f0e118e4291f8886a6b9eec644fb261c846bc3e8: Status 404 returned error can't find the container with id 7cecca931aee54001ae0f564f0e118e4291f8886a6b9eec644fb261c846bc3e8 Nov 23 14:59:44 crc kubenswrapper[4718]: I1123 14:59:44.123490 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kg4x"] Nov 23 14:59:44 crc kubenswrapper[4718]: I1123 14:59:44.123779 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7kg4x" podUID="f2bd9d1a-b72a-43d4-9e10-579590ca19ed" containerName="registry-server" containerID="cri-o://1d4d69709938d515f386d12b6672b1fa67bb687fdf01f94df1bb0419aeb71a17" gracePeriod=2 Nov 23 14:59:44 crc kubenswrapper[4718]: I1123 14:59:44.451723 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f66c151c-281c-4da0-b977-d0fa8b18ba33" path="/var/lib/kubelet/pods/f66c151c-281c-4da0-b977-d0fa8b18ba33/volumes" Nov 23 14:59:44 crc kubenswrapper[4718]: I1123 14:59:44.463972 4718 generic.go:334] "Generic (PLEG): container finished" podID="f2bd9d1a-b72a-43d4-9e10-579590ca19ed" 
containerID="1d4d69709938d515f386d12b6672b1fa67bb687fdf01f94df1bb0419aeb71a17" exitCode=0 Nov 23 14:59:44 crc kubenswrapper[4718]: I1123 14:59:44.464073 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kg4x" event={"ID":"f2bd9d1a-b72a-43d4-9e10-579590ca19ed","Type":"ContainerDied","Data":"1d4d69709938d515f386d12b6672b1fa67bb687fdf01f94df1bb0419aeb71a17"} Nov 23 14:59:44 crc kubenswrapper[4718]: I1123 14:59:44.465157 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-lfvt8" event={"ID":"c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3","Type":"ContainerStarted","Data":"ee1f894986a6d2099342b13eb79cee467e6f9c56aa7b068628ede8f45f86fb8d"} Nov 23 14:59:44 crc kubenswrapper[4718]: I1123 14:59:44.466776 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-6l9m5" event={"ID":"d6998dfc-d923-4ce3-8957-b1aab8aceba2","Type":"ContainerStarted","Data":"7cecca931aee54001ae0f564f0e118e4291f8886a6b9eec644fb261c846bc3e8"} Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.166461 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7kg4x" Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.333501 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2bd9d1a-b72a-43d4-9e10-579590ca19ed-utilities\") pod \"f2bd9d1a-b72a-43d4-9e10-579590ca19ed\" (UID: \"f2bd9d1a-b72a-43d4-9e10-579590ca19ed\") " Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.333551 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2bd9d1a-b72a-43d4-9e10-579590ca19ed-catalog-content\") pod \"f2bd9d1a-b72a-43d4-9e10-579590ca19ed\" (UID: \"f2bd9d1a-b72a-43d4-9e10-579590ca19ed\") " Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.333771 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gth88\" (UniqueName: \"kubernetes.io/projected/f2bd9d1a-b72a-43d4-9e10-579590ca19ed-kube-api-access-gth88\") pod \"f2bd9d1a-b72a-43d4-9e10-579590ca19ed\" (UID: \"f2bd9d1a-b72a-43d4-9e10-579590ca19ed\") " Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.334385 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2bd9d1a-b72a-43d4-9e10-579590ca19ed-utilities" (OuterVolumeSpecName: "utilities") pod "f2bd9d1a-b72a-43d4-9e10-579590ca19ed" (UID: "f2bd9d1a-b72a-43d4-9e10-579590ca19ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.335274 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2bd9d1a-b72a-43d4-9e10-579590ca19ed-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.352739 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2bd9d1a-b72a-43d4-9e10-579590ca19ed-kube-api-access-gth88" (OuterVolumeSpecName: "kube-api-access-gth88") pod "f2bd9d1a-b72a-43d4-9e10-579590ca19ed" (UID: "f2bd9d1a-b72a-43d4-9e10-579590ca19ed"). InnerVolumeSpecName "kube-api-access-gth88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.356487 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2bd9d1a-b72a-43d4-9e10-579590ca19ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2bd9d1a-b72a-43d4-9e10-579590ca19ed" (UID: "f2bd9d1a-b72a-43d4-9e10-579590ca19ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.437282 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gth88\" (UniqueName: \"kubernetes.io/projected/f2bd9d1a-b72a-43d4-9e10-579590ca19ed-kube-api-access-gth88\") on node \"crc\" DevicePath \"\"" Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.437322 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2bd9d1a-b72a-43d4-9e10-579590ca19ed-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.475088 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kg4x" event={"ID":"f2bd9d1a-b72a-43d4-9e10-579590ca19ed","Type":"ContainerDied","Data":"3216b430112df33a0084812eed0c54805eee0fec4b55874aa6c1f4c29e87694a"} Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.475133 4718 scope.go:117] "RemoveContainer" containerID="1d4d69709938d515f386d12b6672b1fa67bb687fdf01f94df1bb0419aeb71a17" Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.475254 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7kg4x" Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.511184 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kg4x"] Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.515706 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kg4x"] Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.524247 4718 scope.go:117] "RemoveContainer" containerID="b417cd5e80c3a9cc8fa407f8ba3392a9be84c75092ed01b8f0cdb3bece48e171" Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.542325 4718 scope.go:117] "RemoveContainer" containerID="011b1d820f502e4344daa7b73dda771e5064a8716249d918a8eb1e1ef9bc21ce" Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.878029 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lfvt8"] Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.895457 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-f28mj"] Nov 23 14:59:45 crc kubenswrapper[4718]: E1123 14:59:45.895748 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bd9d1a-b72a-43d4-9e10-579590ca19ed" containerName="registry-server" Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.895765 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bd9d1a-b72a-43d4-9e10-579590ca19ed" containerName="registry-server" Nov 23 14:59:45 crc kubenswrapper[4718]: E1123 14:59:45.895778 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bd9d1a-b72a-43d4-9e10-579590ca19ed" containerName="extract-utilities" Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.895784 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bd9d1a-b72a-43d4-9e10-579590ca19ed" 
containerName="extract-utilities" Nov 23 14:59:45 crc kubenswrapper[4718]: E1123 14:59:45.895811 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bd9d1a-b72a-43d4-9e10-579590ca19ed" containerName="extract-content" Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.895818 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bd9d1a-b72a-43d4-9e10-579590ca19ed" containerName="extract-content" Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.895945 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2bd9d1a-b72a-43d4-9e10-579590ca19ed" containerName="registry-server" Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.896788 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-f28mj" Nov 23 14:59:45 crc kubenswrapper[4718]: I1123 14:59:45.953253 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-f28mj"] Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.055030 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9976daf8-2e2d-4a74-b9ef-3c052cbe1972-config\") pod \"dnsmasq-dns-666b6646f7-f28mj\" (UID: \"9976daf8-2e2d-4a74-b9ef-3c052cbe1972\") " pod="openstack/dnsmasq-dns-666b6646f7-f28mj" Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.055087 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dft66\" (UniqueName: \"kubernetes.io/projected/9976daf8-2e2d-4a74-b9ef-3c052cbe1972-kube-api-access-dft66\") pod \"dnsmasq-dns-666b6646f7-f28mj\" (UID: \"9976daf8-2e2d-4a74-b9ef-3c052cbe1972\") " pod="openstack/dnsmasq-dns-666b6646f7-f28mj" Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.055212 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9976daf8-2e2d-4a74-b9ef-3c052cbe1972-dns-svc\") pod \"dnsmasq-dns-666b6646f7-f28mj\" (UID: \"9976daf8-2e2d-4a74-b9ef-3c052cbe1972\") " pod="openstack/dnsmasq-dns-666b6646f7-f28mj" Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.156689 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9976daf8-2e2d-4a74-b9ef-3c052cbe1972-dns-svc\") pod \"dnsmasq-dns-666b6646f7-f28mj\" (UID: \"9976daf8-2e2d-4a74-b9ef-3c052cbe1972\") " pod="openstack/dnsmasq-dns-666b6646f7-f28mj" Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.156733 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9976daf8-2e2d-4a74-b9ef-3c052cbe1972-config\") pod \"dnsmasq-dns-666b6646f7-f28mj\" (UID: \"9976daf8-2e2d-4a74-b9ef-3c052cbe1972\") " pod="openstack/dnsmasq-dns-666b6646f7-f28mj" Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.156757 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dft66\" (UniqueName: \"kubernetes.io/projected/9976daf8-2e2d-4a74-b9ef-3c052cbe1972-kube-api-access-dft66\") pod \"dnsmasq-dns-666b6646f7-f28mj\" (UID: \"9976daf8-2e2d-4a74-b9ef-3c052cbe1972\") " pod="openstack/dnsmasq-dns-666b6646f7-f28mj" Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.158649 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/9976daf8-2e2d-4a74-b9ef-3c052cbe1972-dns-svc\") pod \"dnsmasq-dns-666b6646f7-f28mj\" (UID: \"9976daf8-2e2d-4a74-b9ef-3c052cbe1972\") " pod="openstack/dnsmasq-dns-666b6646f7-f28mj" Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.160352 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6l9m5"] Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.161975 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9976daf8-2e2d-4a74-b9ef-3c052cbe1972-config\") pod \"dnsmasq-dns-666b6646f7-f28mj\" (UID: \"9976daf8-2e2d-4a74-b9ef-3c052cbe1972\") " pod="openstack/dnsmasq-dns-666b6646f7-f28mj" Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.178070 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dft66\" (UniqueName: \"kubernetes.io/projected/9976daf8-2e2d-4a74-b9ef-3c052cbe1972-kube-api-access-dft66\") pod \"dnsmasq-dns-666b6646f7-f28mj\" (UID: \"9976daf8-2e2d-4a74-b9ef-3c052cbe1972\") " pod="openstack/dnsmasq-dns-666b6646f7-f28mj" Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.190158 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-lhjc2"] Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.191306 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-lhjc2" Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.211147 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-f28mj" Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.212379 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-lhjc2"] Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.359639 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32168efd-2782-4ed5-9422-8c77a0a04955-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-lhjc2\" (UID: \"32168efd-2782-4ed5-9422-8c77a0a04955\") " pod="openstack/dnsmasq-dns-57d769cc4f-lhjc2" Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.359897 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32168efd-2782-4ed5-9422-8c77a0a04955-config\") pod \"dnsmasq-dns-57d769cc4f-lhjc2\" (UID: \"32168efd-2782-4ed5-9422-8c77a0a04955\") " pod="openstack/dnsmasq-dns-57d769cc4f-lhjc2" Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.359958 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vsn2\" (UniqueName: \"kubernetes.io/projected/32168efd-2782-4ed5-9422-8c77a0a04955-kube-api-access-5vsn2\") pod \"dnsmasq-dns-57d769cc4f-lhjc2\" (UID: \"32168efd-2782-4ed5-9422-8c77a0a04955\") " pod="openstack/dnsmasq-dns-57d769cc4f-lhjc2" Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.454303 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2bd9d1a-b72a-43d4-9e10-579590ca19ed" path="/var/lib/kubelet/pods/f2bd9d1a-b72a-43d4-9e10-579590ca19ed/volumes" Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.462098 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vsn2\" (UniqueName: \"kubernetes.io/projected/32168efd-2782-4ed5-9422-8c77a0a04955-kube-api-access-5vsn2\") pod 
\"dnsmasq-dns-57d769cc4f-lhjc2\" (UID: \"32168efd-2782-4ed5-9422-8c77a0a04955\") " pod="openstack/dnsmasq-dns-57d769cc4f-lhjc2" Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.462396 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32168efd-2782-4ed5-9422-8c77a0a04955-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-lhjc2\" (UID: \"32168efd-2782-4ed5-9422-8c77a0a04955\") " pod="openstack/dnsmasq-dns-57d769cc4f-lhjc2" Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.463143 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32168efd-2782-4ed5-9422-8c77a0a04955-config\") pod \"dnsmasq-dns-57d769cc4f-lhjc2\" (UID: \"32168efd-2782-4ed5-9422-8c77a0a04955\") " pod="openstack/dnsmasq-dns-57d769cc4f-lhjc2" Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.463377 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32168efd-2782-4ed5-9422-8c77a0a04955-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-lhjc2\" (UID: \"32168efd-2782-4ed5-9422-8c77a0a04955\") " pod="openstack/dnsmasq-dns-57d769cc4f-lhjc2" Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.464217 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32168efd-2782-4ed5-9422-8c77a0a04955-config\") pod \"dnsmasq-dns-57d769cc4f-lhjc2\" (UID: \"32168efd-2782-4ed5-9422-8c77a0a04955\") " pod="openstack/dnsmasq-dns-57d769cc4f-lhjc2" Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.506134 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vsn2\" (UniqueName: \"kubernetes.io/projected/32168efd-2782-4ed5-9422-8c77a0a04955-kube-api-access-5vsn2\") pod \"dnsmasq-dns-57d769cc4f-lhjc2\" (UID: \"32168efd-2782-4ed5-9422-8c77a0a04955\") " pod="openstack/dnsmasq-dns-57d769cc4f-lhjc2" Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.522203 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-lhjc2" Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.724276 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-f28mj"] Nov 23 14:59:46 crc kubenswrapper[4718]: W1123 14:59:46.731410 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9976daf8_2e2d_4a74_b9ef_3c052cbe1972.slice/crio-a899d4c0750085bd7d6102a30a387bb5e9b16423ac1281f80f0f84c31ab68eca WatchSource:0}: Error finding container a899d4c0750085bd7d6102a30a387bb5e9b16423ac1281f80f0f84c31ab68eca: Status 404 returned error can't find the container with id a899d4c0750085bd7d6102a30a387bb5e9b16423ac1281f80f0f84c31ab68eca Nov 23 14:59:46 crc kubenswrapper[4718]: I1123 14:59:46.943520 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-lhjc2"] Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.082487 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.083708 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.094266 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.098256 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.099177 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.099940 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.100488 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-np26s" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.100752 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.102781 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.124163 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.172360 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w27vj\" (UniqueName: \"kubernetes.io/projected/177531fd-3e9f-43b3-9540-a1a59957523e-kube-api-access-w27vj\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.172457 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/177531fd-3e9f-43b3-9540-a1a59957523e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.172483 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/177531fd-3e9f-43b3-9540-a1a59957523e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.172764 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/177531fd-3e9f-43b3-9540-a1a59957523e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.172860 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/177531fd-3e9f-43b3-9540-a1a59957523e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.172992 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/177531fd-3e9f-43b3-9540-a1a59957523e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.173160 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.173594 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/177531fd-3e9f-43b3-9540-a1a59957523e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.173752 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/177531fd-3e9f-43b3-9540-a1a59957523e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.173834 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/177531fd-3e9f-43b3-9540-a1a59957523e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.173894 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/177531fd-3e9f-43b3-9540-a1a59957523e-config-data\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.275754 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w27vj\" (UniqueName: \"kubernetes.io/projected/177531fd-3e9f-43b3-9540-a1a59957523e-kube-api-access-w27vj\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.275800 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/177531fd-3e9f-43b3-9540-a1a59957523e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.275821 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/177531fd-3e9f-43b3-9540-a1a59957523e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.275857 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/177531fd-3e9f-43b3-9540-a1a59957523e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " 
pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.275874 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/177531fd-3e9f-43b3-9540-a1a59957523e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.275901 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/177531fd-3e9f-43b3-9540-a1a59957523e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.275932 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.275956 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/177531fd-3e9f-43b3-9540-a1a59957523e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.275987 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/177531fd-3e9f-43b3-9540-a1a59957523e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.276018 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/177531fd-3e9f-43b3-9540-a1a59957523e-config-data\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.276038 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/177531fd-3e9f-43b3-9540-a1a59957523e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.276578 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/177531fd-3e9f-43b3-9540-a1a59957523e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.276692 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.277654 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/177531fd-3e9f-43b3-9540-a1a59957523e-config-data\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.277885 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/177531fd-3e9f-43b3-9540-a1a59957523e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.278049 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/177531fd-3e9f-43b3-9540-a1a59957523e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.278460 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/177531fd-3e9f-43b3-9540-a1a59957523e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.282716 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/177531fd-3e9f-43b3-9540-a1a59957523e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.285109 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/177531fd-3e9f-43b3-9540-a1a59957523e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.285877 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/177531fd-3e9f-43b3-9540-a1a59957523e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.291656 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/177531fd-3e9f-43b3-9540-a1a59957523e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.293248 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w27vj\" (UniqueName: \"kubernetes.io/projected/177531fd-3e9f-43b3-9540-a1a59957523e-kube-api-access-w27vj\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.297042 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.322707 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] 
Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.327596 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.331560 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.331989 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.332119 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dgjq4" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.332285 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.332514 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.333394 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.333894 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.334049 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.424858 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.478816 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz78q\" (UniqueName: \"kubernetes.io/projected/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-kube-api-access-xz78q\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.478904 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.478952 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.478984 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.479091 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.479162 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.479291 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.479314 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.479804 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.479935 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.479979 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.497049 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-f28mj" event={"ID":"9976daf8-2e2d-4a74-b9ef-3c052cbe1972","Type":"ContainerStarted","Data":"a899d4c0750085bd7d6102a30a387bb5e9b16423ac1281f80f0f84c31ab68eca"} Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.498145 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-lhjc2" event={"ID":"32168efd-2782-4ed5-9422-8c77a0a04955","Type":"ContainerStarted","Data":"a090706bcd6151bb48eea223ccd583e956520cb35a0fe23e78f4c5a9f18f8091"} Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.581619 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.581671 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.581719 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.581736 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.581775 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.581801 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.581814 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.581840 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz78q\" (UniqueName: \"kubernetes.io/projected/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-kube-api-access-xz78q\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.581881 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.581900 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.581929 4718 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.582899 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.583538 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.583972 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.584029 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.584019 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.584499 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.590337 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.599230 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.599230 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.601395 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.601969 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz78q\" (UniqueName: \"kubernetes.io/projected/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-kube-api-access-xz78q\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.603602 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.671402 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 14:59:47 crc kubenswrapper[4718]: I1123 14:59:47.845268 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.005333 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.014832 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.016542 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.017722 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.017809 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.018596 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.019513 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-hdn65" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.023353 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.111221 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/adbd2274-f81b-4930-85c1-eec8a7a3790d-kolla-config\") pod \"openstack-galera-0\" (UID: \"adbd2274-f81b-4930-85c1-eec8a7a3790d\") " pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.114102 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdlpd\" (UniqueName: \"kubernetes.io/projected/adbd2274-f81b-4930-85c1-eec8a7a3790d-kube-api-access-sdlpd\") pod \"openstack-galera-0\" (UID: \"adbd2274-f81b-4930-85c1-eec8a7a3790d\") " pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.114179 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/adbd2274-f81b-4930-85c1-eec8a7a3790d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"adbd2274-f81b-4930-85c1-eec8a7a3790d\") " pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.114272 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adbd2274-f81b-4930-85c1-eec8a7a3790d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"adbd2274-f81b-4930-85c1-eec8a7a3790d\") " pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.114299 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/adbd2274-f81b-4930-85c1-eec8a7a3790d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"adbd2274-f81b-4930-85c1-eec8a7a3790d\") " pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.114331 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"adbd2274-f81b-4930-85c1-eec8a7a3790d\") " pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.114387 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adbd2274-f81b-4930-85c1-eec8a7a3790d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"adbd2274-f81b-4930-85c1-eec8a7a3790d\") " pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.114429 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/adbd2274-f81b-4930-85c1-eec8a7a3790d-config-data-default\") pod \"openstack-galera-0\" (UID: \"adbd2274-f81b-4930-85c1-eec8a7a3790d\") " pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.216720 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdlpd\" (UniqueName: \"kubernetes.io/projected/adbd2274-f81b-4930-85c1-eec8a7a3790d-kube-api-access-sdlpd\") pod \"openstack-galera-0\" (UID: \"adbd2274-f81b-4930-85c1-eec8a7a3790d\") " pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.216798 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/adbd2274-f81b-4930-85c1-eec8a7a3790d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"adbd2274-f81b-4930-85c1-eec8a7a3790d\") " pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.216854 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adbd2274-f81b-4930-85c1-eec8a7a3790d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"adbd2274-f81b-4930-85c1-eec8a7a3790d\") " pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.216882 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/adbd2274-f81b-4930-85c1-eec8a7a3790d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"adbd2274-f81b-4930-85c1-eec8a7a3790d\") " pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.216911 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"adbd2274-f81b-4930-85c1-eec8a7a3790d\") " pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.216953 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adbd2274-f81b-4930-85c1-eec8a7a3790d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"adbd2274-f81b-4930-85c1-eec8a7a3790d\") " pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.216989 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/adbd2274-f81b-4930-85c1-eec8a7a3790d-config-data-default\") pod \"openstack-galera-0\" (UID: \"adbd2274-f81b-4930-85c1-eec8a7a3790d\") " pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.217044 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/adbd2274-f81b-4930-85c1-eec8a7a3790d-kolla-config\") pod \"openstack-galera-0\" (UID: 
\"adbd2274-f81b-4930-85c1-eec8a7a3790d\") " pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.217890 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/adbd2274-f81b-4930-85c1-eec8a7a3790d-kolla-config\") pod \"openstack-galera-0\" (UID: \"adbd2274-f81b-4930-85c1-eec8a7a3790d\") " pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.219692 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/adbd2274-f81b-4930-85c1-eec8a7a3790d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"adbd2274-f81b-4930-85c1-eec8a7a3790d\") " pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.223755 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adbd2274-f81b-4930-85c1-eec8a7a3790d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"adbd2274-f81b-4930-85c1-eec8a7a3790d\") " pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.225865 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"adbd2274-f81b-4930-85c1-eec8a7a3790d\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.227089 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/adbd2274-f81b-4930-85c1-eec8a7a3790d-config-data-default\") pod \"openstack-galera-0\" (UID: \"adbd2274-f81b-4930-85c1-eec8a7a3790d\") " pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.234214 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adbd2274-f81b-4930-85c1-eec8a7a3790d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"adbd2274-f81b-4930-85c1-eec8a7a3790d\") " pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.252273 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/adbd2274-f81b-4930-85c1-eec8a7a3790d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"adbd2274-f81b-4930-85c1-eec8a7a3790d\") " pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.258412 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdlpd\" (UniqueName: \"kubernetes.io/projected/adbd2274-f81b-4930-85c1-eec8a7a3790d-kube-api-access-sdlpd\") pod \"openstack-galera-0\" (UID: \"adbd2274-f81b-4930-85c1-eec8a7a3790d\") " pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.271320 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"adbd2274-f81b-4930-85c1-eec8a7a3790d\") " pod="openstack/openstack-galera-0" Nov 23 14:59:49 crc kubenswrapper[4718]: I1123 14:59:49.346255 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.347535 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.349228 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.355846 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-nf62v" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.356128 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.356279 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.356491 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.370729 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.436703 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"da1e3b17-14ea-456e-a694-073e8fd4edaf\") " pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.436798 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9tfz\" (UniqueName: \"kubernetes.io/projected/da1e3b17-14ea-456e-a694-073e8fd4edaf-kube-api-access-l9tfz\") pod \"openstack-cell1-galera-0\" (UID: \"da1e3b17-14ea-456e-a694-073e8fd4edaf\") " pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.436830 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/da1e3b17-14ea-456e-a694-073e8fd4edaf-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"da1e3b17-14ea-456e-a694-073e8fd4edaf\") " pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.436877 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/da1e3b17-14ea-456e-a694-073e8fd4edaf-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"da1e3b17-14ea-456e-a694-073e8fd4edaf\") " pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.436901 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da1e3b17-14ea-456e-a694-073e8fd4edaf-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"da1e3b17-14ea-456e-a694-073e8fd4edaf\") " pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.436982 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/da1e3b17-14ea-456e-a694-073e8fd4edaf-kolla-config\") 
pod \"openstack-cell1-galera-0\" (UID: \"da1e3b17-14ea-456e-a694-073e8fd4edaf\") " pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.437021 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/da1e3b17-14ea-456e-a694-073e8fd4edaf-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"da1e3b17-14ea-456e-a694-073e8fd4edaf\") " pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.437061 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da1e3b17-14ea-456e-a694-073e8fd4edaf-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"da1e3b17-14ea-456e-a694-073e8fd4edaf\") " pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.538107 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"da1e3b17-14ea-456e-a694-073e8fd4edaf\") " pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.538172 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9tfz\" (UniqueName: \"kubernetes.io/projected/da1e3b17-14ea-456e-a694-073e8fd4edaf-kube-api-access-l9tfz\") pod \"openstack-cell1-galera-0\" (UID: \"da1e3b17-14ea-456e-a694-073e8fd4edaf\") " pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.538209 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/da1e3b17-14ea-456e-a694-073e8fd4edaf-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"da1e3b17-14ea-456e-a694-073e8fd4edaf\") " pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.538250 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/da1e3b17-14ea-456e-a694-073e8fd4edaf-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"da1e3b17-14ea-456e-a694-073e8fd4edaf\") " pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.538266 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da1e3b17-14ea-456e-a694-073e8fd4edaf-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"da1e3b17-14ea-456e-a694-073e8fd4edaf\") " pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.538284 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/da1e3b17-14ea-456e-a694-073e8fd4edaf-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"da1e3b17-14ea-456e-a694-073e8fd4edaf\") " pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.538300 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/da1e3b17-14ea-456e-a694-073e8fd4edaf-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: 
\"da1e3b17-14ea-456e-a694-073e8fd4edaf\") " pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.538317 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da1e3b17-14ea-456e-a694-073e8fd4edaf-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"da1e3b17-14ea-456e-a694-073e8fd4edaf\") " pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.539301 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/da1e3b17-14ea-456e-a694-073e8fd4edaf-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"da1e3b17-14ea-456e-a694-073e8fd4edaf\") " pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.540704 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/da1e3b17-14ea-456e-a694-073e8fd4edaf-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"da1e3b17-14ea-456e-a694-073e8fd4edaf\") " pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.541080 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/da1e3b17-14ea-456e-a694-073e8fd4edaf-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"da1e3b17-14ea-456e-a694-073e8fd4edaf\") " pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.541956 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da1e3b17-14ea-456e-a694-073e8fd4edaf-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"da1e3b17-14ea-456e-a694-073e8fd4edaf\") " pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.542186 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"da1e3b17-14ea-456e-a694-073e8fd4edaf\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.543616 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da1e3b17-14ea-456e-a694-073e8fd4edaf-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"da1e3b17-14ea-456e-a694-073e8fd4edaf\") " pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.544631 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/da1e3b17-14ea-456e-a694-073e8fd4edaf-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"da1e3b17-14ea-456e-a694-073e8fd4edaf\") " pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.564019 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9tfz\" (UniqueName: \"kubernetes.io/projected/da1e3b17-14ea-456e-a694-073e8fd4edaf-kube-api-access-l9tfz\") pod \"openstack-cell1-galera-0\" (UID: \"da1e3b17-14ea-456e-a694-073e8fd4edaf\") " pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc 
kubenswrapper[4718]: I1123 14:59:50.573709 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"da1e3b17-14ea-456e-a694-073e8fd4edaf\") " pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.646192 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.647113 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.649089 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-7258j" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.649803 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.650420 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.687507 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.722400 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.741112 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b5324d05-32a6-4859-9288-de1f3bd9389d-kolla-config\") pod \"memcached-0\" (UID: \"b5324d05-32a6-4859-9288-de1f3bd9389d\") " pod="openstack/memcached-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.741182 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5324d05-32a6-4859-9288-de1f3bd9389d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b5324d05-32a6-4859-9288-de1f3bd9389d\") " pod="openstack/memcached-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.741207 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5324d05-32a6-4859-9288-de1f3bd9389d-config-data\") pod \"memcached-0\" (UID: \"b5324d05-32a6-4859-9288-de1f3bd9389d\") " pod="openstack/memcached-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.741267 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv6vm\" (UniqueName: \"kubernetes.io/projected/b5324d05-32a6-4859-9288-de1f3bd9389d-kube-api-access-jv6vm\") pod \"memcached-0\" (UID: \"b5324d05-32a6-4859-9288-de1f3bd9389d\") " pod="openstack/memcached-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.741475 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5324d05-32a6-4859-9288-de1f3bd9389d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b5324d05-32a6-4859-9288-de1f3bd9389d\") " pod="openstack/memcached-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.843481 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/b5324d05-32a6-4859-9288-de1f3bd9389d-kolla-config\") pod \"memcached-0\" (UID: \"b5324d05-32a6-4859-9288-de1f3bd9389d\") " pod="openstack/memcached-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.843553 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5324d05-32a6-4859-9288-de1f3bd9389d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b5324d05-32a6-4859-9288-de1f3bd9389d\") " pod="openstack/memcached-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.843575 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5324d05-32a6-4859-9288-de1f3bd9389d-config-data\") pod \"memcached-0\" (UID: \"b5324d05-32a6-4859-9288-de1f3bd9389d\") " pod="openstack/memcached-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.843611 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv6vm\" (UniqueName: \"kubernetes.io/projected/b5324d05-32a6-4859-9288-de1f3bd9389d-kube-api-access-jv6vm\") pod \"memcached-0\" (UID: \"b5324d05-32a6-4859-9288-de1f3bd9389d\") " pod="openstack/memcached-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.843669 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5324d05-32a6-4859-9288-de1f3bd9389d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b5324d05-32a6-4859-9288-de1f3bd9389d\") " pod="openstack/memcached-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.844249 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b5324d05-32a6-4859-9288-de1f3bd9389d-kolla-config\") pod \"memcached-0\" (UID: \"b5324d05-32a6-4859-9288-de1f3bd9389d\") " pod="openstack/memcached-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.844267 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5324d05-32a6-4859-9288-de1f3bd9389d-config-data\") pod \"memcached-0\" (UID: \"b5324d05-32a6-4859-9288-de1f3bd9389d\") " pod="openstack/memcached-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.849003 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5324d05-32a6-4859-9288-de1f3bd9389d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b5324d05-32a6-4859-9288-de1f3bd9389d\") " pod="openstack/memcached-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.851128 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5324d05-32a6-4859-9288-de1f3bd9389d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b5324d05-32a6-4859-9288-de1f3bd9389d\") " pod="openstack/memcached-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.870945 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv6vm\" (UniqueName: \"kubernetes.io/projected/b5324d05-32a6-4859-9288-de1f3bd9389d-kube-api-access-jv6vm\") pod \"memcached-0\" (UID: \"b5324d05-32a6-4859-9288-de1f3bd9389d\") " pod="openstack/memcached-0" Nov 23 14:59:50 crc kubenswrapper[4718]: I1123 14:59:50.977044 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 23 14:59:51 crc kubenswrapper[4718]: W1123 14:59:51.587769 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod177531fd_3e9f_43b3_9540_a1a59957523e.slice/crio-00274cd7f93663f9688c98e0d56e09802fb290277dc13cead986582e9babba50 WatchSource:0}: Error finding container 00274cd7f93663f9688c98e0d56e09802fb290277dc13cead986582e9babba50: Status 404 returned error can't find the container with id 00274cd7f93663f9688c98e0d56e09802fb290277dc13cead986582e9babba50 Nov 23 14:59:52 crc kubenswrapper[4718]: I1123 14:59:52.050795 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 14:59:52 crc kubenswrapper[4718]: I1123 14:59:52.545369 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"177531fd-3e9f-43b3-9540-a1a59957523e","Type":"ContainerStarted","Data":"00274cd7f93663f9688c98e0d56e09802fb290277dc13cead986582e9babba50"} Nov 23 14:59:52 crc kubenswrapper[4718]: I1123 14:59:52.940755 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 14:59:52 crc kubenswrapper[4718]: I1123 14:59:52.941626 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 14:59:52 crc kubenswrapper[4718]: I1123 14:59:52.943970 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-f6mf5" Nov 23 14:59:52 crc kubenswrapper[4718]: I1123 14:59:52.955482 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 14:59:52 crc kubenswrapper[4718]: I1123 14:59:52.979592 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28lf5\" (UniqueName: \"kubernetes.io/projected/7309bec0-7ad8-47d8-8f72-ba8944a161e2-kube-api-access-28lf5\") pod \"kube-state-metrics-0\" (UID: \"7309bec0-7ad8-47d8-8f72-ba8944a161e2\") " pod="openstack/kube-state-metrics-0" Nov 23 14:59:53 crc kubenswrapper[4718]: I1123 14:59:53.081228 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28lf5\" (UniqueName: \"kubernetes.io/projected/7309bec0-7ad8-47d8-8f72-ba8944a161e2-kube-api-access-28lf5\") pod \"kube-state-metrics-0\" (UID: \"7309bec0-7ad8-47d8-8f72-ba8944a161e2\") " pod="openstack/kube-state-metrics-0" Nov 23 14:59:53 crc kubenswrapper[4718]: I1123 14:59:53.106874 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28lf5\" (UniqueName: \"kubernetes.io/projected/7309bec0-7ad8-47d8-8f72-ba8944a161e2-kube-api-access-28lf5\") pod \"kube-state-metrics-0\" (UID: \"7309bec0-7ad8-47d8-8f72-ba8944a161e2\") " pod="openstack/kube-state-metrics-0" Nov 23 14:59:53 crc kubenswrapper[4718]: I1123 14:59:53.268396 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 14:59:54 crc kubenswrapper[4718]: W1123 14:59:54.001852 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53aec48a_9b4c_4d78_8eaf_bce48ccfd6ed.slice/crio-f36f91735966b1584661bc8d9b1f2b643c8f1c8255b655c5d2398ce013053f64 WatchSource:0}: Error finding container f36f91735966b1584661bc8d9b1f2b643c8f1c8255b655c5d2398ce013053f64: Status 404 returned error can't find the container with id f36f91735966b1584661bc8d9b1f2b643c8f1c8255b655c5d2398ce013053f64 Nov 23 14:59:54 crc kubenswrapper[4718]: I1123 14:59:54.562259 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed","Type":"ContainerStarted","Data":"f36f91735966b1584661bc8d9b1f2b643c8f1c8255b655c5d2398ce013053f64"} Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.237180 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-dc8hm"] Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.238353 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dc8hm" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.240353 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.241615 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.248315 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-82g5s" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.260403 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dc8hm"] Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.349655 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/65b3425b-bedb-4274-a600-091b1910a2d7-var-log-ovn\") pod \"ovn-controller-dc8hm\" (UID: \"65b3425b-bedb-4274-a600-091b1910a2d7\") " pod="openstack/ovn-controller-dc8hm" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.349697 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/65b3425b-bedb-4274-a600-091b1910a2d7-var-run-ovn\") pod \"ovn-controller-dc8hm\" (UID: \"65b3425b-bedb-4274-a600-091b1910a2d7\") " pod="openstack/ovn-controller-dc8hm" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.349729 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs6z5\" (UniqueName: \"kubernetes.io/projected/65b3425b-bedb-4274-a600-091b1910a2d7-kube-api-access-vs6z5\") pod \"ovn-controller-dc8hm\" (UID: \"65b3425b-bedb-4274-a600-091b1910a2d7\") " pod="openstack/ovn-controller-dc8hm" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.349776 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b3425b-bedb-4274-a600-091b1910a2d7-ovn-controller-tls-certs\") pod \"ovn-controller-dc8hm\" (UID: \"65b3425b-bedb-4274-a600-091b1910a2d7\") " pod="openstack/ovn-controller-dc8hm" Nov 23 14:59:57 crc kubenswrapper[4718]: 
I1123 14:59:57.349814 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/65b3425b-bedb-4274-a600-091b1910a2d7-var-run\") pod \"ovn-controller-dc8hm\" (UID: \"65b3425b-bedb-4274-a600-091b1910a2d7\") " pod="openstack/ovn-controller-dc8hm" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.349832 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b3425b-bedb-4274-a600-091b1910a2d7-combined-ca-bundle\") pod \"ovn-controller-dc8hm\" (UID: \"65b3425b-bedb-4274-a600-091b1910a2d7\") " pod="openstack/ovn-controller-dc8hm" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.349846 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65b3425b-bedb-4274-a600-091b1910a2d7-scripts\") pod \"ovn-controller-dc8hm\" (UID: \"65b3425b-bedb-4274-a600-091b1910a2d7\") " pod="openstack/ovn-controller-dc8hm" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.350855 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-86mpq"] Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.352762 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-86mpq" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.372793 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-86mpq"] Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.450837 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e6e54f9e-4a86-41d3-9723-9455c682fddc-var-run\") pod \"ovn-controller-ovs-86mpq\" (UID: \"e6e54f9e-4a86-41d3-9723-9455c682fddc\") " pod="openstack/ovn-controller-ovs-86mpq" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.450905 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b3425b-bedb-4274-a600-091b1910a2d7-ovn-controller-tls-certs\") pod \"ovn-controller-dc8hm\" (UID: \"65b3425b-bedb-4274-a600-091b1910a2d7\") " pod="openstack/ovn-controller-dc8hm" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.450990 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/65b3425b-bedb-4274-a600-091b1910a2d7-var-run\") pod \"ovn-controller-dc8hm\" (UID: \"65b3425b-bedb-4274-a600-091b1910a2d7\") " pod="openstack/ovn-controller-dc8hm" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.451019 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b3425b-bedb-4274-a600-091b1910a2d7-combined-ca-bundle\") pod \"ovn-controller-dc8hm\" (UID: \"65b3425b-bedb-4274-a600-091b1910a2d7\") " pod="openstack/ovn-controller-dc8hm" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.451038 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65b3425b-bedb-4274-a600-091b1910a2d7-scripts\") pod \"ovn-controller-dc8hm\" (UID: \"65b3425b-bedb-4274-a600-091b1910a2d7\") " pod="openstack/ovn-controller-dc8hm" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.451059 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e6e54f9e-4a86-41d3-9723-9455c682fddc-etc-ovs\") pod \"ovn-controller-ovs-86mpq\" (UID: \"e6e54f9e-4a86-41d3-9723-9455c682fddc\") " pod="openstack/ovn-controller-ovs-86mpq" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.451082 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qd96\" (UniqueName: \"kubernetes.io/projected/e6e54f9e-4a86-41d3-9723-9455c682fddc-kube-api-access-9qd96\") pod \"ovn-controller-ovs-86mpq\" (UID: \"e6e54f9e-4a86-41d3-9723-9455c682fddc\") " pod="openstack/ovn-controller-ovs-86mpq" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.451099 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6e54f9e-4a86-41d3-9723-9455c682fddc-scripts\") pod \"ovn-controller-ovs-86mpq\" (UID: \"e6e54f9e-4a86-41d3-9723-9455c682fddc\") " pod="openstack/ovn-controller-ovs-86mpq" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.451131 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e6e54f9e-4a86-41d3-9723-9455c682fddc-var-log\") pod \"ovn-controller-ovs-86mpq\" (UID: \"e6e54f9e-4a86-41d3-9723-9455c682fddc\") " pod="openstack/ovn-controller-ovs-86mpq" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.451150 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/65b3425b-bedb-4274-a600-091b1910a2d7-var-log-ovn\") pod \"ovn-controller-dc8hm\" (UID: \"65b3425b-bedb-4274-a600-091b1910a2d7\") " pod="openstack/ovn-controller-dc8hm" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.451176 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/65b3425b-bedb-4274-a600-091b1910a2d7-var-run-ovn\") pod \"ovn-controller-dc8hm\" (UID: \"65b3425b-bedb-4274-a600-091b1910a2d7\") " pod="openstack/ovn-controller-dc8hm" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.451199 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e6e54f9e-4a86-41d3-9723-9455c682fddc-var-lib\") pod \"ovn-controller-ovs-86mpq\" (UID: \"e6e54f9e-4a86-41d3-9723-9455c682fddc\") " pod="openstack/ovn-controller-ovs-86mpq" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.451217 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs6z5\" (UniqueName: \"kubernetes.io/projected/65b3425b-bedb-4274-a600-091b1910a2d7-kube-api-access-vs6z5\") pod \"ovn-controller-dc8hm\" (UID: \"65b3425b-bedb-4274-a600-091b1910a2d7\") " pod="openstack/ovn-controller-dc8hm" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.451662 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/65b3425b-bedb-4274-a600-091b1910a2d7-var-run-ovn\") pod \"ovn-controller-dc8hm\" (UID: \"65b3425b-bedb-4274-a600-091b1910a2d7\") " pod="openstack/ovn-controller-dc8hm" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.451726 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/65b3425b-bedb-4274-a600-091b1910a2d7-var-log-ovn\") pod \"ovn-controller-dc8hm\" (UID: \"65b3425b-bedb-4274-a600-091b1910a2d7\") " pod="openstack/ovn-controller-dc8hm" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.451829 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/65b3425b-bedb-4274-a600-091b1910a2d7-var-run\") pod \"ovn-controller-dc8hm\" (UID: \"65b3425b-bedb-4274-a600-091b1910a2d7\") " pod="openstack/ovn-controller-dc8hm" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.454077 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65b3425b-bedb-4274-a600-091b1910a2d7-scripts\") pod \"ovn-controller-dc8hm\" (UID: \"65b3425b-bedb-4274-a600-091b1910a2d7\") " pod="openstack/ovn-controller-dc8hm" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.456278 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b3425b-bedb-4274-a600-091b1910a2d7-ovn-controller-tls-certs\") pod \"ovn-controller-dc8hm\" (UID: \"65b3425b-bedb-4274-a600-091b1910a2d7\") " pod="openstack/ovn-controller-dc8hm" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.456381 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b3425b-bedb-4274-a600-091b1910a2d7-combined-ca-bundle\") pod \"ovn-controller-dc8hm\" (UID: \"65b3425b-bedb-4274-a600-091b1910a2d7\") " pod="openstack/ovn-controller-dc8hm" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.468975 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs6z5\" (UniqueName: \"kubernetes.io/projected/65b3425b-bedb-4274-a600-091b1910a2d7-kube-api-access-vs6z5\") pod \"ovn-controller-dc8hm\" (UID: \"65b3425b-bedb-4274-a600-091b1910a2d7\") " pod="openstack/ovn-controller-dc8hm" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.552933 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e6e54f9e-4a86-41d3-9723-9455c682fddc-var-log\") pod \"ovn-controller-ovs-86mpq\" (UID: \"e6e54f9e-4a86-41d3-9723-9455c682fddc\") " pod="openstack/ovn-controller-ovs-86mpq" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.553368 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e6e54f9e-4a86-41d3-9723-9455c682fddc-var-lib\") pod \"ovn-controller-ovs-86mpq\" (UID: \"e6e54f9e-4a86-41d3-9723-9455c682fddc\") " pod="openstack/ovn-controller-ovs-86mpq" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.553511 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e6e54f9e-4a86-41d3-9723-9455c682fddc-var-lib\") pod \"ovn-controller-ovs-86mpq\" (UID: \"e6e54f9e-4a86-41d3-9723-9455c682fddc\") " pod="openstack/ovn-controller-ovs-86mpq" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.553517 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e6e54f9e-4a86-41d3-9723-9455c682fddc-var-run\") pod \"ovn-controller-ovs-86mpq\" (UID: \"e6e54f9e-4a86-41d3-9723-9455c682fddc\") " pod="openstack/ovn-controller-ovs-86mpq" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.553270 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e6e54f9e-4a86-41d3-9723-9455c682fddc-var-log\") pod \"ovn-controller-ovs-86mpq\" (UID: \"e6e54f9e-4a86-41d3-9723-9455c682fddc\") " pod="openstack/ovn-controller-ovs-86mpq" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.553749 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e6e54f9e-4a86-41d3-9723-9455c682fddc-var-run\") pod \"ovn-controller-ovs-86mpq\" (UID: \"e6e54f9e-4a86-41d3-9723-9455c682fddc\") " pod="openstack/ovn-controller-ovs-86mpq" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.553756 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e6e54f9e-4a86-41d3-9723-9455c682fddc-etc-ovs\") pod \"ovn-controller-ovs-86mpq\" (UID: \"e6e54f9e-4a86-41d3-9723-9455c682fddc\") " pod="openstack/ovn-controller-ovs-86mpq" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.553969 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qd96\" (UniqueName: \"kubernetes.io/projected/e6e54f9e-4a86-41d3-9723-9455c682fddc-kube-api-access-9qd96\") pod \"ovn-controller-ovs-86mpq\" (UID: \"e6e54f9e-4a86-41d3-9723-9455c682fddc\") " pod="openstack/ovn-controller-ovs-86mpq" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.554070 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6e54f9e-4a86-41d3-9723-9455c682fddc-scripts\") pod \"ovn-controller-ovs-86mpq\" (UID: \"e6e54f9e-4a86-41d3-9723-9455c682fddc\") " pod="openstack/ovn-controller-ovs-86mpq" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.553885 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e6e54f9e-4a86-41d3-9723-9455c682fddc-etc-ovs\") pod \"ovn-controller-ovs-86mpq\" (UID: \"e6e54f9e-4a86-41d3-9723-9455c682fddc\") " pod="openstack/ovn-controller-ovs-86mpq" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.556665 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6e54f9e-4a86-41d3-9723-9455c682fddc-scripts\") pod \"ovn-controller-ovs-86mpq\" (UID: \"e6e54f9e-4a86-41d3-9723-9455c682fddc\") " pod="openstack/ovn-controller-ovs-86mpq" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.562219 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dc8hm" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.581966 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qd96\" (UniqueName: \"kubernetes.io/projected/e6e54f9e-4a86-41d3-9723-9455c682fddc-kube-api-access-9qd96\") pod \"ovn-controller-ovs-86mpq\" (UID: \"e6e54f9e-4a86-41d3-9723-9455c682fddc\") " pod="openstack/ovn-controller-ovs-86mpq" Nov 23 14:59:57 crc kubenswrapper[4718]: I1123 14:59:57.693323 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-86mpq" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.064010 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.065269 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.067264 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.067654 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.067772 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.068127 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-pmfw6" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.068275 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.105956 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.185041 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33d2daa7-22e5-4713-9cc1-3d976c1559e3-config\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") " pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.185099 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/33d2daa7-22e5-4713-9cc1-3d976c1559e3-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") " pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.185149 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d2daa7-22e5-4713-9cc1-3d976c1559e3-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") " pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.185252 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msgc6\" (UniqueName: \"kubernetes.io/projected/33d2daa7-22e5-4713-9cc1-3d976c1559e3-kube-api-access-msgc6\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") " pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.185287 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/33d2daa7-22e5-4713-9cc1-3d976c1559e3-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") " pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.185318 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/33d2daa7-22e5-4713-9cc1-3d976c1559e3-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") " pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.185338 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33d2daa7-22e5-4713-9cc1-3d976c1559e3-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") " pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.185364 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") " pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.286356 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/33d2daa7-22e5-4713-9cc1-3d976c1559e3-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") " pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.286533 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d2daa7-22e5-4713-9cc1-3d976c1559e3-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") " pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.286665 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msgc6\" (UniqueName: \"kubernetes.io/projected/33d2daa7-22e5-4713-9cc1-3d976c1559e3-kube-api-access-msgc6\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") " pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.286717 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/33d2daa7-22e5-4713-9cc1-3d976c1559e3-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") " pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.286767 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/33d2daa7-22e5-4713-9cc1-3d976c1559e3-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") " pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.286796 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33d2daa7-22e5-4713-9cc1-3d976c1559e3-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") " pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.286842 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") " pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.286876 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33d2daa7-22e5-4713-9cc1-3d976c1559e3-config\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") " pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 
14:59:59.287146 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/33d2daa7-22e5-4713-9cc1-3d976c1559e3-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") " pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.287343 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.288398 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33d2daa7-22e5-4713-9cc1-3d976c1559e3-config\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") " pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.288848 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33d2daa7-22e5-4713-9cc1-3d976c1559e3-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") " pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.290257 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/33d2daa7-22e5-4713-9cc1-3d976c1559e3-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") " pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.290716 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/33d2daa7-22e5-4713-9cc1-3d976c1559e3-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") " pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.300178 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d2daa7-22e5-4713-9cc1-3d976c1559e3-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") " pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.308373 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") " pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.312774 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msgc6\" (UniqueName: \"kubernetes.io/projected/33d2daa7-22e5-4713-9cc1-3d976c1559e3-kube-api-access-msgc6\") pod \"ovsdbserver-sb-0\" (UID: \"33d2daa7-22e5-4713-9cc1-3d976c1559e3\") " pod="openstack/ovsdbserver-sb-0" Nov 23 14:59:59 crc kubenswrapper[4718]: I1123 14:59:59.394539 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.009952 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.011796 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.017533 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-w24n7" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.017780 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.017869 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.018365 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.027857 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.046545 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.109122 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0d21970-a68c-4d2b-bbcb-18ae83284d95-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") " pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.109794 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0d21970-a68c-4d2b-bbcb-18ae83284d95-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") " pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.109925 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d21970-a68c-4d2b-bbcb-18ae83284d95-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") " pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.110069 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0d21970-a68c-4d2b-bbcb-18ae83284d95-config\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") " pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.110106 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0d21970-a68c-4d2b-bbcb-18ae83284d95-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") " pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.110288 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/a0d21970-a68c-4d2b-bbcb-18ae83284d95-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") " pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.110376 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") " pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.110553 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bq4v\" (UniqueName: \"kubernetes.io/projected/a0d21970-a68c-4d2b-bbcb-18ae83284d95-kube-api-access-6bq4v\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") " pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.132499 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398500-qk4nq"] Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.134901 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398500-qk4nq" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.138199 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.138469 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.143943 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398500-qk4nq"] Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.212755 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0d21970-a68c-4d2b-bbcb-18ae83284d95-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") " pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.212807 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0d21970-a68c-4d2b-bbcb-18ae83284d95-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") " pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.212845 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d21970-a68c-4d2b-bbcb-18ae83284d95-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") " pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.212866 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae77b8e6-84d5-4680-a816-dced35246342-secret-volume\") pod \"collect-profiles-29398500-qk4nq\" (UID: \"ae77b8e6-84d5-4680-a816-dced35246342\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398500-qk4nq" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.212890 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0d21970-a68c-4d2b-bbcb-18ae83284d95-config\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") " pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.212906 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0d21970-a68c-4d2b-bbcb-18ae83284d95-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") " pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.212924 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj7x8\" (UniqueName: \"kubernetes.io/projected/ae77b8e6-84d5-4680-a816-dced35246342-kube-api-access-jj7x8\") pod \"collect-profiles-29398500-qk4nq\" (UID: \"ae77b8e6-84d5-4680-a816-dced35246342\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398500-qk4nq" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.212962 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a0d21970-a68c-4d2b-bbcb-18ae83284d95-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") " pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.212976 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae77b8e6-84d5-4680-a816-dced35246342-config-volume\") pod \"collect-profiles-29398500-qk4nq\" (UID: \"ae77b8e6-84d5-4680-a816-dced35246342\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398500-qk4nq" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.212996 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") " pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.213023 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bq4v\" (UniqueName: \"kubernetes.io/projected/a0d21970-a68c-4d2b-bbcb-18ae83284d95-kube-api-access-6bq4v\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") " pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.213931 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a0d21970-a68c-4d2b-bbcb-18ae83284d95-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") " pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.214500 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0d21970-a68c-4d2b-bbcb-18ae83284d95-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") " pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.214685 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.216204 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0d21970-a68c-4d2b-bbcb-18ae83284d95-config\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") " pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.221776 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0d21970-a68c-4d2b-bbcb-18ae83284d95-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") " pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.221782 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d21970-a68c-4d2b-bbcb-18ae83284d95-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") " pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.223891 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0d21970-a68c-4d2b-bbcb-18ae83284d95-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") " pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.236158 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bq4v\" (UniqueName: \"kubernetes.io/projected/a0d21970-a68c-4d2b-bbcb-18ae83284d95-kube-api-access-6bq4v\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") " pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.239779 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a0d21970-a68c-4d2b-bbcb-18ae83284d95\") " pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.314893 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj7x8\" (UniqueName: \"kubernetes.io/projected/ae77b8e6-84d5-4680-a816-dced35246342-kube-api-access-jj7x8\") pod \"collect-profiles-29398500-qk4nq\" (UID: \"ae77b8e6-84d5-4680-a816-dced35246342\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398500-qk4nq" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.314985 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae77b8e6-84d5-4680-a816-dced35246342-config-volume\") pod \"collect-profiles-29398500-qk4nq\" (UID: \"ae77b8e6-84d5-4680-a816-dced35246342\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398500-qk4nq" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.315085 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae77b8e6-84d5-4680-a816-dced35246342-secret-volume\") pod \"collect-profiles-29398500-qk4nq\" (UID: \"ae77b8e6-84d5-4680-a816-dced35246342\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29398500-qk4nq" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.317639 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae77b8e6-84d5-4680-a816-dced35246342-config-volume\") pod \"collect-profiles-29398500-qk4nq\" (UID: \"ae77b8e6-84d5-4680-a816-dced35246342\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398500-qk4nq" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.320163 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae77b8e6-84d5-4680-a816-dced35246342-secret-volume\") pod \"collect-profiles-29398500-qk4nq\" (UID: \"ae77b8e6-84d5-4680-a816-dced35246342\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398500-qk4nq" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.334548 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj7x8\" (UniqueName: \"kubernetes.io/projected/ae77b8e6-84d5-4680-a816-dced35246342-kube-api-access-jj7x8\") pod \"collect-profiles-29398500-qk4nq\" (UID: \"ae77b8e6-84d5-4680-a816-dced35246342\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398500-qk4nq" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.345382 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:00 crc kubenswrapper[4718]: I1123 15:00:00.453951 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398500-qk4nq" Nov 23 15:00:08 crc kubenswrapper[4718]: I1123 15:00:08.680762 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7309bec0-7ad8-47d8-8f72-ba8944a161e2","Type":"ContainerStarted","Data":"7f52effd6a639f019cca851c2a43aa50badf5bb8c55afcb3e86754240ae4f44f"} Nov 23 15:00:16 crc kubenswrapper[4718]: E1123 15:00:16.407294 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 23 15:00:16 crc kubenswrapper[4718]: E1123 15:00:16.408034 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w27vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(177531fd-3e9f-43b3-9540-a1a59957523e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 15:00:16 crc kubenswrapper[4718]: E1123 15:00:16.409249 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="177531fd-3e9f-43b3-9540-a1a59957523e" Nov 23 15:00:16 crc kubenswrapper[4718]: E1123 15:00:16.756764 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="177531fd-3e9f-43b3-9540-a1a59957523e" Nov 23 15:00:17 crc kubenswrapper[4718]: E1123 15:00:17.439786 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 23 15:00:17 crc kubenswrapper[4718]: E1123 15:00:17.440359 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vsn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-lhjc2_openstack(32168efd-2782-4ed5-9422-8c77a0a04955): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 15:00:17 crc kubenswrapper[4718]: E1123 15:00:17.443636 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-lhjc2" podUID="32168efd-2782-4ed5-9422-8c77a0a04955" Nov 23 15:00:17 crc kubenswrapper[4718]: E1123 15:00:17.571193 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 23 15:00:17 crc kubenswrapper[4718]: E1123 15:00:17.571342 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmq2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-6l9m5_openstack(d6998dfc-d923-4ce3-8957-b1aab8aceba2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 15:00:17 crc kubenswrapper[4718]: E1123 15:00:17.572570 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-6l9m5" podUID="d6998dfc-d923-4ce3-8957-b1aab8aceba2" Nov 23 15:00:17 crc kubenswrapper[4718]: E1123 15:00:17.600211 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 23 15:00:17 crc kubenswrapper[4718]: E1123 15:00:17.600614 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nlc72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-lfvt8_openstack(c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 15:00:17 crc kubenswrapper[4718]: E1123 15:00:17.601912 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-lfvt8" podUID="c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3" Nov 23 15:00:17 crc kubenswrapper[4718]: E1123 15:00:17.667664 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 23 15:00:17 crc kubenswrapper[4718]: E1123 15:00:17.667875 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz78q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 15:00:17 crc kubenswrapper[4718]: E1123 15:00:17.669864 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed" Nov 23 15:00:17 crc kubenswrapper[4718]: E1123 15:00:17.765210 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed" Nov 23 15:00:17 crc kubenswrapper[4718]: E1123 15:00:17.768037 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 23 15:00:17 crc kubenswrapper[4718]: E1123 15:00:17.768323 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dft66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-f28mj_openstack(9976daf8-2e2d-4a74-b9ef-3c052cbe1972): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 15:00:17 crc kubenswrapper[4718]: E1123 15:00:17.769845 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-f28mj" podUID="9976daf8-2e2d-4a74-b9ef-3c052cbe1972" Nov 23 15:00:17 crc kubenswrapper[4718]: E1123 15:00:17.772475 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-lhjc2" podUID="32168efd-2782-4ed5-9422-8c77a0a04955" Nov 23 15:00:17 crc kubenswrapper[4718]: I1123 15:00:17.785577 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 23 15:00:18 crc kubenswrapper[4718]: W1123 15:00:18.104516 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadbd2274_f81b_4930_85c1_eec8a7a3790d.slice/crio-44666f575abf689c7d2a250b8901299427d8da6923736f8dd89f256e86749d7e WatchSource:0}: Error finding container 44666f575abf689c7d2a250b8901299427d8da6923736f8dd89f256e86749d7e: Status 404 returned error can't find the container with id 44666f575abf689c7d2a250b8901299427d8da6923736f8dd89f256e86749d7e Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.153343 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.166389 4718 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.275544 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398500-qk4nq"] Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.284015 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dc8hm"] Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.363955 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-86mpq"] Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.501226 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.603650 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6l9m5" Nov 23 15:00:18 crc kubenswrapper[4718]: W1123 15:00:18.635176 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6e54f9e_4a86_41d3_9723_9455c682fddc.slice/crio-6cc55b669563e413b88111df734b82a8d85438960de9e481365d16c67141bb88 WatchSource:0}: Error finding container 6cc55b669563e413b88111df734b82a8d85438960de9e481365d16c67141bb88: Status 404 returned error can't find the container with id 6cc55b669563e413b88111df734b82a8d85438960de9e481365d16c67141bb88 Nov 23 15:00:18 crc kubenswrapper[4718]: W1123 15:00:18.638267 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0d21970_a68c_4d2b_bbcb_18ae83284d95.slice/crio-bdb74716a5f3c54004d05130ca2c12d159e21d7c3145c04546250930088c20ee WatchSource:0}: Error finding container bdb74716a5f3c54004d05130ca2c12d159e21d7c3145c04546250930088c20ee: Status 404 returned error can't find the container with id bdb74716a5f3c54004d05130ca2c12d159e21d7c3145c04546250930088c20ee Nov 23 15:00:18 crc kubenswrapper[4718]: W1123 15:00:18.640770 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae77b8e6_84d5_4680_a816_dced35246342.slice/crio-ff8fb2729c01f5110a8af7db244ad1ef779e38917a20173ba07770dc12af2aff WatchSource:0}: Error finding container ff8fb2729c01f5110a8af7db244ad1ef779e38917a20173ba07770dc12af2aff: Status 404 returned error can't find the container with id ff8fb2729c01f5110a8af7db244ad1ef779e38917a20173ba07770dc12af2aff Nov 23 15:00:18 crc kubenswrapper[4718]: W1123 15:00:18.642820 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65b3425b_bedb_4274_a600_091b1910a2d7.slice/crio-405660863d817a47cd75b05b67cdfe965760887be9d80585de282106bcb0e3a8 WatchSource:0}: Error finding container 405660863d817a47cd75b05b67cdfe965760887be9d80585de282106bcb0e3a8: Status 404 returned error can't find the container with id 405660863d817a47cd75b05b67cdfe965760887be9d80585de282106bcb0e3a8 Nov 23 15:00:18 crc kubenswrapper[4718]: W1123 15:00:18.643288 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5324d05_32a6_4859_9288_de1f3bd9389d.slice/crio-cb8719560db7e996372e55548ae640de52bb11a3296f2e945311790c81c359fc WatchSource:0}: Error finding container cb8719560db7e996372e55548ae640de52bb11a3296f2e945311790c81c359fc: Status 404 returned error can't 
find the container with id cb8719560db7e996372e55548ae640de52bb11a3296f2e945311790c81c359fc Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.700461 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lfvt8" Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.704007 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6998dfc-d923-4ce3-8957-b1aab8aceba2-config\") pod \"d6998dfc-d923-4ce3-8957-b1aab8aceba2\" (UID: \"d6998dfc-d923-4ce3-8957-b1aab8aceba2\") " Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.704071 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmq2r\" (UniqueName: \"kubernetes.io/projected/d6998dfc-d923-4ce3-8957-b1aab8aceba2-kube-api-access-dmq2r\") pod \"d6998dfc-d923-4ce3-8957-b1aab8aceba2\" (UID: \"d6998dfc-d923-4ce3-8957-b1aab8aceba2\") " Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.704165 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6998dfc-d923-4ce3-8957-b1aab8aceba2-dns-svc\") pod \"d6998dfc-d923-4ce3-8957-b1aab8aceba2\" (UID: \"d6998dfc-d923-4ce3-8957-b1aab8aceba2\") " Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.705022 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6998dfc-d923-4ce3-8957-b1aab8aceba2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d6998dfc-d923-4ce3-8957-b1aab8aceba2" (UID: "d6998dfc-d923-4ce3-8957-b1aab8aceba2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.705370 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6998dfc-d923-4ce3-8957-b1aab8aceba2-config" (OuterVolumeSpecName: "config") pod "d6998dfc-d923-4ce3-8957-b1aab8aceba2" (UID: "d6998dfc-d923-4ce3-8957-b1aab8aceba2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.709151 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6998dfc-d923-4ce3-8957-b1aab8aceba2-kube-api-access-dmq2r" (OuterVolumeSpecName: "kube-api-access-dmq2r") pod "d6998dfc-d923-4ce3-8957-b1aab8aceba2" (UID: "d6998dfc-d923-4ce3-8957-b1aab8aceba2"). InnerVolumeSpecName "kube-api-access-dmq2r". 
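The stretch above records the same image-pull failure pattern several times: the CRI image service returns "rpc error: code = Canceled desc = copying config: context canceled", kubelet surfaces it as ErrImagePull on the pod's init container, and subsequent sync attempts fall into ImagePullBackOff, while the superseded dnsmasq pods get their configmap and projected volumes unmounted. A minimal client-go sketch for finding pods stuck this way from the API side; the namespace and the kubeconfig discovery are assumptions.

    // Sketch: list init containers stuck in a waiting state (for example
    // ErrImagePull / ImagePullBackOff, as in the log above) via client-go.
    // Namespace and kubeconfig location are assumptions.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pods, err := client.CoreV1().Pods("openstack").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, p := range pods.Items {
    		for _, cs := range p.Status.InitContainerStatuses {
    			if w := cs.State.Waiting; w != nil {
    				// e.g. "rabbitmq-server-0/setup-container: ImagePullBackOff ..."
    				fmt.Printf("%s/%s: %s %s\n", p.Name, cs.Name, w.Reason, w.Message)
    			}
    		}
    	}
    }
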
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.771041 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a0d21970-a68c-4d2b-bbcb-18ae83284d95","Type":"ContainerStarted","Data":"bdb74716a5f3c54004d05130ca2c12d159e21d7c3145c04546250930088c20ee"} Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.772166 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"da1e3b17-14ea-456e-a694-073e8fd4edaf","Type":"ContainerStarted","Data":"487649ed47a5b2ba666f7aa211073eec1a7bb4b0101759085d0e063791068e5f"} Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.774651 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-lfvt8" event={"ID":"c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3","Type":"ContainerDied","Data":"ee1f894986a6d2099342b13eb79cee467e6f9c56aa7b068628ede8f45f86fb8d"} Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.774773 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lfvt8" Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.775659 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-86mpq" event={"ID":"e6e54f9e-4a86-41d3-9723-9455c682fddc","Type":"ContainerStarted","Data":"6cc55b669563e413b88111df734b82a8d85438960de9e481365d16c67141bb88"} Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.777460 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"adbd2274-f81b-4930-85c1-eec8a7a3790d","Type":"ContainerStarted","Data":"44666f575abf689c7d2a250b8901299427d8da6923736f8dd89f256e86749d7e"} Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.778403 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-6l9m5" event={"ID":"d6998dfc-d923-4ce3-8957-b1aab8aceba2","Type":"ContainerDied","Data":"7cecca931aee54001ae0f564f0e118e4291f8886a6b9eec644fb261c846bc3e8"} Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.778419 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6l9m5" Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.779688 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dc8hm" event={"ID":"65b3425b-bedb-4274-a600-091b1910a2d7","Type":"ContainerStarted","Data":"405660863d817a47cd75b05b67cdfe965760887be9d80585de282106bcb0e3a8"} Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.780968 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398500-qk4nq" event={"ID":"ae77b8e6-84d5-4680-a816-dced35246342","Type":"ContainerStarted","Data":"ff8fb2729c01f5110a8af7db244ad1ef779e38917a20173ba07770dc12af2aff"} Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.782121 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b5324d05-32a6-4859-9288-de1f3bd9389d","Type":"ContainerStarted","Data":"cb8719560db7e996372e55548ae640de52bb11a3296f2e945311790c81c359fc"} Nov 23 15:00:18 crc kubenswrapper[4718]: E1123 15:00:18.784571 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-f28mj" podUID="9976daf8-2e2d-4a74-b9ef-3c052cbe1972" Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.806168 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlc72\" (UniqueName: \"kubernetes.io/projected/c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3-kube-api-access-nlc72\") pod \"c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3\" (UID: \"c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3\") " Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.806347 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3-config\") pod \"c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3\" (UID: \"c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3\") " Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.806857 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6998dfc-d923-4ce3-8957-b1aab8aceba2-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.806873 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6998dfc-d923-4ce3-8957-b1aab8aceba2-config\") on node \"crc\" DevicePath \"\"" Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.806885 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmq2r\" (UniqueName: \"kubernetes.io/projected/d6998dfc-d923-4ce3-8957-b1aab8aceba2-kube-api-access-dmq2r\") on node \"crc\" DevicePath \"\"" Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.809918 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3-config" (OuterVolumeSpecName: "config") pod "c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3" (UID: "c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.811941 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3-kube-api-access-nlc72" (OuterVolumeSpecName: "kube-api-access-nlc72") pod "c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3" (UID: "c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3"). InnerVolumeSpecName "kube-api-access-nlc72". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.847664 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6l9m5"] Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.852608 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6l9m5"] Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.894312 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.908270 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3-config\") on node \"crc\" DevicePath \"\"" Nov 23 15:00:18 crc kubenswrapper[4718]: I1123 15:00:18.908298 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlc72\" (UniqueName: \"kubernetes.io/projected/c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3-kube-api-access-nlc72\") on node \"crc\" DevicePath \"\"" Nov 23 15:00:19 crc kubenswrapper[4718]: I1123 15:00:19.125057 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lfvt8"] Nov 23 15:00:19 crc kubenswrapper[4718]: I1123 15:00:19.143665 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lfvt8"] Nov 23 15:00:19 crc kubenswrapper[4718]: I1123 15:00:19.791333 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"33d2daa7-22e5-4713-9cc1-3d976c1559e3","Type":"ContainerStarted","Data":"f945bc19e55673a96a683aa14cb30764a0807bb12204b50fb87369cbf62ca374"} Nov 23 15:00:20 crc kubenswrapper[4718]: I1123 15:00:20.453261 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3" path="/var/lib/kubelet/pods/c3f9c74f-aa79-43c3-a1d3-883ddf1d0bb3/volumes" Nov 23 15:00:20 crc kubenswrapper[4718]: I1123 15:00:20.453946 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6998dfc-d923-4ce3-8957-b1aab8aceba2" path="/var/lib/kubelet/pods/d6998dfc-d923-4ce3-8957-b1aab8aceba2/volumes" Nov 23 15:00:23 crc kubenswrapper[4718]: I1123 15:00:23.053579 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 15:00:23 crc kubenswrapper[4718]: I1123 15:00:23.054004 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 15:00:26 crc kubenswrapper[4718]: I1123 15:00:26.851652 4718 generic.go:334] "Generic (PLEG): container finished" 
podID="ae77b8e6-84d5-4680-a816-dced35246342" containerID="2c9435f41e32628f6136745e3e8ede0d36aa1c4c5fc5e600b81656983549e9ec" exitCode=0 Nov 23 15:00:26 crc kubenswrapper[4718]: I1123 15:00:26.851728 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398500-qk4nq" event={"ID":"ae77b8e6-84d5-4680-a816-dced35246342","Type":"ContainerDied","Data":"2c9435f41e32628f6136745e3e8ede0d36aa1c4c5fc5e600b81656983549e9ec"} Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.202989 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398500-qk4nq" Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.368821 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae77b8e6-84d5-4680-a816-dced35246342-secret-volume\") pod \"ae77b8e6-84d5-4680-a816-dced35246342\" (UID: \"ae77b8e6-84d5-4680-a816-dced35246342\") " Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.368872 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae77b8e6-84d5-4680-a816-dced35246342-config-volume\") pod \"ae77b8e6-84d5-4680-a816-dced35246342\" (UID: \"ae77b8e6-84d5-4680-a816-dced35246342\") " Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.368905 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj7x8\" (UniqueName: \"kubernetes.io/projected/ae77b8e6-84d5-4680-a816-dced35246342-kube-api-access-jj7x8\") pod \"ae77b8e6-84d5-4680-a816-dced35246342\" (UID: \"ae77b8e6-84d5-4680-a816-dced35246342\") " Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.371591 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae77b8e6-84d5-4680-a816-dced35246342-config-volume" (OuterVolumeSpecName: "config-volume") pod "ae77b8e6-84d5-4680-a816-dced35246342" (UID: "ae77b8e6-84d5-4680-a816-dced35246342"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.374066 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae77b8e6-84d5-4680-a816-dced35246342-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ae77b8e6-84d5-4680-a816-dced35246342" (UID: "ae77b8e6-84d5-4680-a816-dced35246342"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.374328 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae77b8e6-84d5-4680-a816-dced35246342-kube-api-access-jj7x8" (OuterVolumeSpecName: "kube-api-access-jj7x8") pod "ae77b8e6-84d5-4680-a816-dced35246342" (UID: "ae77b8e6-84d5-4680-a816-dced35246342"). InnerVolumeSpecName "kube-api-access-jj7x8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.470887 4718 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae77b8e6-84d5-4680-a816-dced35246342-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.470920 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae77b8e6-84d5-4680-a816-dced35246342-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.470931 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj7x8\" (UniqueName: \"kubernetes.io/projected/ae77b8e6-84d5-4680-a816-dced35246342-kube-api-access-jj7x8\") on node \"crc\" DevicePath \"\"" Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.870731 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a0d21970-a68c-4d2b-bbcb-18ae83284d95","Type":"ContainerStarted","Data":"d4b3ef3dce56a660badfd85b3c4fc4bf2f3cb96f7e656256539321d5b0daca26"} Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.872136 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b5324d05-32a6-4859-9288-de1f3bd9389d","Type":"ContainerStarted","Data":"2b89bc88117135f516e3ccb0578a2930818bb62b1f125d11e7e2bb3013a560db"} Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.873118 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.875178 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-86mpq" event={"ID":"e6e54f9e-4a86-41d3-9723-9455c682fddc","Type":"ContainerStarted","Data":"d224c88813c7688515a195b11ef23ca25bdb6c0f7797a21333f499cc69ec3678"} Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.877594 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dc8hm" event={"ID":"65b3425b-bedb-4274-a600-091b1910a2d7","Type":"ContainerStarted","Data":"375d56ad2b0c187cf6e347392fbff0d79e92838f7fbbfcfe320f2e1bd106cd1a"} Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.877747 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-dc8hm" Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.879663 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398500-qk4nq" Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.880352 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398500-qk4nq" event={"ID":"ae77b8e6-84d5-4680-a816-dced35246342","Type":"ContainerDied","Data":"ff8fb2729c01f5110a8af7db244ad1ef779e38917a20173ba07770dc12af2aff"} Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.880376 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff8fb2729c01f5110a8af7db244ad1ef779e38917a20173ba07770dc12af2aff" Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.883605 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"da1e3b17-14ea-456e-a694-073e8fd4edaf","Type":"ContainerStarted","Data":"3867c79ccedd71cf6d3bbaec5d58d590eb5bde5597e35576f7018b302c0118b7"} Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.886186 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7309bec0-7ad8-47d8-8f72-ba8944a161e2","Type":"ContainerStarted","Data":"4d189af6aaaa28c4f6206b731bf20e4da52974ba5c8be4e5a72009ab629998c4"} Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.886310 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.887918 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"adbd2274-f81b-4930-85c1-eec8a7a3790d","Type":"ContainerStarted","Data":"584e6cd1708050ccdca854136bb59d2cbc13f385cca80bac07738b0902671b3e"} Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.890589 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"33d2daa7-22e5-4713-9cc1-3d976c1559e3","Type":"ContainerStarted","Data":"e5eb45fd3eeffd9ff30d9f31e5c87d685853a0aee46cc7d187cae9869d42409b"} Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.902594 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=29.92771941 podStartE2EDuration="38.902575692s" podCreationTimestamp="2025-11-23 14:59:50 +0000 UTC" firstStartedPulling="2025-11-23 15:00:18.660946664 +0000 UTC m=+869.900566508" lastFinishedPulling="2025-11-23 15:00:27.635802936 +0000 UTC m=+878.875422790" observedRunningTime="2025-11-23 15:00:28.898770649 +0000 UTC m=+880.138390493" watchObservedRunningTime="2025-11-23 15:00:28.902575692 +0000 UTC m=+880.142195536" Nov 23 15:00:28 crc kubenswrapper[4718]: I1123 15:00:28.941842 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-dc8hm" podStartSLOduration=22.949913222 podStartE2EDuration="31.941824437s" podCreationTimestamp="2025-11-23 14:59:57 +0000 UTC" firstStartedPulling="2025-11-23 15:00:18.645643089 +0000 UTC m=+869.885262933" lastFinishedPulling="2025-11-23 15:00:27.637554294 +0000 UTC m=+878.877174148" observedRunningTime="2025-11-23 15:00:28.936152423 +0000 UTC m=+880.175772267" watchObservedRunningTime="2025-11-23 15:00:28.941824437 +0000 UTC m=+880.181444281" Nov 23 15:00:29 crc kubenswrapper[4718]: I1123 15:00:29.244522 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=17.872761779 podStartE2EDuration="37.244493712s" podCreationTimestamp="2025-11-23 
14:59:52 +0000 UTC" firstStartedPulling="2025-11-23 15:00:08.116904129 +0000 UTC m=+859.356523983" lastFinishedPulling="2025-11-23 15:00:27.488636052 +0000 UTC m=+878.728255916" observedRunningTime="2025-11-23 15:00:29.002636228 +0000 UTC m=+880.242256082" watchObservedRunningTime="2025-11-23 15:00:29.244493712 +0000 UTC m=+880.484113566" Nov 23 15:00:29 crc kubenswrapper[4718]: I1123 15:00:29.901999 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"177531fd-3e9f-43b3-9540-a1a59957523e","Type":"ContainerStarted","Data":"7106cb00063fa6ce3f4a7a647b44da4086d5cece529537226fea4f2ca1d965f2"} Nov 23 15:00:29 crc kubenswrapper[4718]: I1123 15:00:29.905657 4718 generic.go:334] "Generic (PLEG): container finished" podID="e6e54f9e-4a86-41d3-9723-9455c682fddc" containerID="d224c88813c7688515a195b11ef23ca25bdb6c0f7797a21333f499cc69ec3678" exitCode=0 Nov 23 15:00:29 crc kubenswrapper[4718]: I1123 15:00:29.905751 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-86mpq" event={"ID":"e6e54f9e-4a86-41d3-9723-9455c682fddc","Type":"ContainerDied","Data":"d224c88813c7688515a195b11ef23ca25bdb6c0f7797a21333f499cc69ec3678"} Nov 23 15:00:30 crc kubenswrapper[4718]: I1123 15:00:30.913672 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-86mpq" event={"ID":"e6e54f9e-4a86-41d3-9723-9455c682fddc","Type":"ContainerStarted","Data":"691caa1026b57079d11ff877c2340860ca0713164d0b98276e796975e1d9b23b"} Nov 23 15:00:32 crc kubenswrapper[4718]: I1123 15:00:32.930876 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-86mpq" event={"ID":"e6e54f9e-4a86-41d3-9723-9455c682fddc","Type":"ContainerStarted","Data":"c4976534791c691398cddfe003024cd2adce4c321af97c426cb3127fd0b1eba2"} Nov 23 15:00:32 crc kubenswrapper[4718]: I1123 15:00:32.937571 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed","Type":"ContainerStarted","Data":"d6478d663a65987d90b0eec3e5169d51a136fda6e4d7dbeef356884ff9f91c2c"} Nov 23 15:00:33 crc kubenswrapper[4718]: I1123 15:00:33.273517 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 23 15:00:33 crc kubenswrapper[4718]: I1123 15:00:33.947417 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-86mpq" Nov 23 15:00:33 crc kubenswrapper[4718]: I1123 15:00:33.947592 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-86mpq" Nov 23 15:00:33 crc kubenswrapper[4718]: I1123 15:00:33.981614 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-86mpq" podStartSLOduration=27.982017002 podStartE2EDuration="36.981595377s" podCreationTimestamp="2025-11-23 14:59:57 +0000 UTC" firstStartedPulling="2025-11-23 15:00:18.63836466 +0000 UTC m=+869.877984504" lastFinishedPulling="2025-11-23 15:00:27.637942995 +0000 UTC m=+878.877562879" observedRunningTime="2025-11-23 15:00:33.977755172 +0000 UTC m=+885.217375026" watchObservedRunningTime="2025-11-23 15:00:33.981595377 +0000 UTC m=+885.221215221" Nov 23 15:00:35 crc kubenswrapper[4718]: I1123 15:00:35.979370 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 23 15:00:43 crc kubenswrapper[4718]: I1123 15:00:43.333489 4718 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-666b6646f7-f28mj"] Nov 23 15:00:43 crc kubenswrapper[4718]: I1123 15:00:43.370797 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-hspwc"] Nov 23 15:00:43 crc kubenswrapper[4718]: E1123 15:00:43.371114 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae77b8e6-84d5-4680-a816-dced35246342" containerName="collect-profiles" Nov 23 15:00:43 crc kubenswrapper[4718]: I1123 15:00:43.371132 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae77b8e6-84d5-4680-a816-dced35246342" containerName="collect-profiles" Nov 23 15:00:43 crc kubenswrapper[4718]: I1123 15:00:43.371306 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae77b8e6-84d5-4680-a816-dced35246342" containerName="collect-profiles" Nov 23 15:00:43 crc kubenswrapper[4718]: I1123 15:00:43.372081 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-hspwc" Nov 23 15:00:43 crc kubenswrapper[4718]: I1123 15:00:43.387319 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-hspwc"] Nov 23 15:00:43 crc kubenswrapper[4718]: I1123 15:00:43.459296 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/561f5ecd-5dfc-47a0-be21-b89eaf87f469-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-hspwc\" (UID: \"561f5ecd-5dfc-47a0-be21-b89eaf87f469\") " pod="openstack/dnsmasq-dns-7cb5889db5-hspwc" Nov 23 15:00:43 crc kubenswrapper[4718]: I1123 15:00:43.459394 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4fpn\" (UniqueName: \"kubernetes.io/projected/561f5ecd-5dfc-47a0-be21-b89eaf87f469-kube-api-access-w4fpn\") pod \"dnsmasq-dns-7cb5889db5-hspwc\" (UID: \"561f5ecd-5dfc-47a0-be21-b89eaf87f469\") " pod="openstack/dnsmasq-dns-7cb5889db5-hspwc" Nov 23 15:00:43 crc kubenswrapper[4718]: I1123 15:00:43.459518 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/561f5ecd-5dfc-47a0-be21-b89eaf87f469-config\") pod \"dnsmasq-dns-7cb5889db5-hspwc\" (UID: \"561f5ecd-5dfc-47a0-be21-b89eaf87f469\") " pod="openstack/dnsmasq-dns-7cb5889db5-hspwc" Nov 23 15:00:43 crc kubenswrapper[4718]: I1123 15:00:43.561403 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/561f5ecd-5dfc-47a0-be21-b89eaf87f469-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-hspwc\" (UID: \"561f5ecd-5dfc-47a0-be21-b89eaf87f469\") " pod="openstack/dnsmasq-dns-7cb5889db5-hspwc" Nov 23 15:00:43 crc kubenswrapper[4718]: I1123 15:00:43.561510 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4fpn\" (UniqueName: \"kubernetes.io/projected/561f5ecd-5dfc-47a0-be21-b89eaf87f469-kube-api-access-w4fpn\") pod \"dnsmasq-dns-7cb5889db5-hspwc\" (UID: \"561f5ecd-5dfc-47a0-be21-b89eaf87f469\") " pod="openstack/dnsmasq-dns-7cb5889db5-hspwc" Nov 23 15:00:43 crc kubenswrapper[4718]: I1123 15:00:43.561567 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/561f5ecd-5dfc-47a0-be21-b89eaf87f469-config\") pod \"dnsmasq-dns-7cb5889db5-hspwc\" (UID: \"561f5ecd-5dfc-47a0-be21-b89eaf87f469\") " pod="openstack/dnsmasq-dns-7cb5889db5-hspwc" Nov 23 15:00:43 crc 
kubenswrapper[4718]: I1123 15:00:43.562480 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/561f5ecd-5dfc-47a0-be21-b89eaf87f469-config\") pod \"dnsmasq-dns-7cb5889db5-hspwc\" (UID: \"561f5ecd-5dfc-47a0-be21-b89eaf87f469\") " pod="openstack/dnsmasq-dns-7cb5889db5-hspwc" Nov 23 15:00:43 crc kubenswrapper[4718]: I1123 15:00:43.562756 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/561f5ecd-5dfc-47a0-be21-b89eaf87f469-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-hspwc\" (UID: \"561f5ecd-5dfc-47a0-be21-b89eaf87f469\") " pod="openstack/dnsmasq-dns-7cb5889db5-hspwc" Nov 23 15:00:43 crc kubenswrapper[4718]: I1123 15:00:43.578857 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4fpn\" (UniqueName: \"kubernetes.io/projected/561f5ecd-5dfc-47a0-be21-b89eaf87f469-kube-api-access-w4fpn\") pod \"dnsmasq-dns-7cb5889db5-hspwc\" (UID: \"561f5ecd-5dfc-47a0-be21-b89eaf87f469\") " pod="openstack/dnsmasq-dns-7cb5889db5-hspwc" Nov 23 15:00:43 crc kubenswrapper[4718]: I1123 15:00:43.690373 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-hspwc" Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.492127 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.497826 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.500064 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.500405 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.500805 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.500985 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-wpcp2" Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.529309 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.578730 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ef94753b-867a-4e46-9ff8-66178f25efaa-etc-swift\") pod \"swift-storage-0\" (UID: \"ef94753b-867a-4e46-9ff8-66178f25efaa\") " pod="openstack/swift-storage-0" Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.578798 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ef94753b-867a-4e46-9ff8-66178f25efaa-cache\") pod \"swift-storage-0\" (UID: \"ef94753b-867a-4e46-9ff8-66178f25efaa\") " pod="openstack/swift-storage-0" Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.578960 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"ef94753b-867a-4e46-9ff8-66178f25efaa\") " pod="openstack/swift-storage-0" Nov 23 15:00:44 crc 
kubenswrapper[4718]: I1123 15:00:44.579027 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzzlx\" (UniqueName: \"kubernetes.io/projected/ef94753b-867a-4e46-9ff8-66178f25efaa-kube-api-access-vzzlx\") pod \"swift-storage-0\" (UID: \"ef94753b-867a-4e46-9ff8-66178f25efaa\") " pod="openstack/swift-storage-0" Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.579140 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ef94753b-867a-4e46-9ff8-66178f25efaa-lock\") pod \"swift-storage-0\" (UID: \"ef94753b-867a-4e46-9ff8-66178f25efaa\") " pod="openstack/swift-storage-0" Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.680952 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ef94753b-867a-4e46-9ff8-66178f25efaa-etc-swift\") pod \"swift-storage-0\" (UID: \"ef94753b-867a-4e46-9ff8-66178f25efaa\") " pod="openstack/swift-storage-0" Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.681005 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ef94753b-867a-4e46-9ff8-66178f25efaa-cache\") pod \"swift-storage-0\" (UID: \"ef94753b-867a-4e46-9ff8-66178f25efaa\") " pod="openstack/swift-storage-0" Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.681033 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"ef94753b-867a-4e46-9ff8-66178f25efaa\") " pod="openstack/swift-storage-0" Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.681053 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzzlx\" (UniqueName: \"kubernetes.io/projected/ef94753b-867a-4e46-9ff8-66178f25efaa-kube-api-access-vzzlx\") pod \"swift-storage-0\" (UID: \"ef94753b-867a-4e46-9ff8-66178f25efaa\") " pod="openstack/swift-storage-0" Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.681081 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ef94753b-867a-4e46-9ff8-66178f25efaa-lock\") pod \"swift-storage-0\" (UID: \"ef94753b-867a-4e46-9ff8-66178f25efaa\") " pod="openstack/swift-storage-0" Nov 23 15:00:44 crc kubenswrapper[4718]: E1123 15:00:44.681120 4718 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 15:00:44 crc kubenswrapper[4718]: E1123 15:00:44.681154 4718 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 23 15:00:44 crc kubenswrapper[4718]: E1123 15:00:44.681204 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ef94753b-867a-4e46-9ff8-66178f25efaa-etc-swift podName:ef94753b-867a-4e46-9ff8-66178f25efaa nodeName:}" failed. No retries permitted until 2025-11-23 15:00:45.181187503 +0000 UTC m=+896.420807347 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ef94753b-867a-4e46-9ff8-66178f25efaa-etc-swift") pod "swift-storage-0" (UID: "ef94753b-867a-4e46-9ff8-66178f25efaa") : configmap "swift-ring-files" not found Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.681394 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"ef94753b-867a-4e46-9ff8-66178f25efaa\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/swift-storage-0" Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.681621 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ef94753b-867a-4e46-9ff8-66178f25efaa-lock\") pod \"swift-storage-0\" (UID: \"ef94753b-867a-4e46-9ff8-66178f25efaa\") " pod="openstack/swift-storage-0" Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.681853 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ef94753b-867a-4e46-9ff8-66178f25efaa-cache\") pod \"swift-storage-0\" (UID: \"ef94753b-867a-4e46-9ff8-66178f25efaa\") " pod="openstack/swift-storage-0" Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.696754 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzzlx\" (UniqueName: \"kubernetes.io/projected/ef94753b-867a-4e46-9ff8-66178f25efaa-kube-api-access-vzzlx\") pod \"swift-storage-0\" (UID: \"ef94753b-867a-4e46-9ff8-66178f25efaa\") " pod="openstack/swift-storage-0" Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.699495 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"ef94753b-867a-4e46-9ff8-66178f25efaa\") " pod="openstack/swift-storage-0" Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.964550 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-sx654"] Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.966730 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-sx654" Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.976384 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.976578 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 23 15:00:44 crc kubenswrapper[4718]: I1123 15:00:44.977204 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 23 15:00:45 crc kubenswrapper[4718]: I1123 15:00:45.001981 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-sx654"] Nov 23 15:00:45 crc kubenswrapper[4718]: I1123 15:00:45.089965 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpsnl\" (UniqueName: \"kubernetes.io/projected/79cae030-375c-4b0e-9dfc-e823f922196b-kube-api-access-qpsnl\") pod \"swift-ring-rebalance-sx654\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " pod="openstack/swift-ring-rebalance-sx654" Nov 23 15:00:45 crc kubenswrapper[4718]: I1123 15:00:45.090092 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/79cae030-375c-4b0e-9dfc-e823f922196b-dispersionconf\") pod \"swift-ring-rebalance-sx654\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " pod="openstack/swift-ring-rebalance-sx654" Nov 23 15:00:45 crc kubenswrapper[4718]: I1123 15:00:45.090133 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79cae030-375c-4b0e-9dfc-e823f922196b-combined-ca-bundle\") pod \"swift-ring-rebalance-sx654\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " pod="openstack/swift-ring-rebalance-sx654" Nov 23 15:00:45 crc kubenswrapper[4718]: I1123 15:00:45.090161 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/79cae030-375c-4b0e-9dfc-e823f922196b-etc-swift\") pod \"swift-ring-rebalance-sx654\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " pod="openstack/swift-ring-rebalance-sx654" Nov 23 15:00:45 crc kubenswrapper[4718]: I1123 15:00:45.090184 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/79cae030-375c-4b0e-9dfc-e823f922196b-swiftconf\") pod \"swift-ring-rebalance-sx654\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " pod="openstack/swift-ring-rebalance-sx654" Nov 23 15:00:45 crc kubenswrapper[4718]: I1123 15:00:45.090568 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79cae030-375c-4b0e-9dfc-e823f922196b-scripts\") pod \"swift-ring-rebalance-sx654\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " pod="openstack/swift-ring-rebalance-sx654" Nov 23 15:00:45 crc kubenswrapper[4718]: I1123 15:00:45.090689 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/79cae030-375c-4b0e-9dfc-e823f922196b-ring-data-devices\") pod \"swift-ring-rebalance-sx654\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " pod="openstack/swift-ring-rebalance-sx654" Nov 23 
15:00:45 crc kubenswrapper[4718]: I1123 15:00:45.192525 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ef94753b-867a-4e46-9ff8-66178f25efaa-etc-swift\") pod \"swift-storage-0\" (UID: \"ef94753b-867a-4e46-9ff8-66178f25efaa\") " pod="openstack/swift-storage-0" Nov 23 15:00:45 crc kubenswrapper[4718]: I1123 15:00:45.192614 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/79cae030-375c-4b0e-9dfc-e823f922196b-dispersionconf\") pod \"swift-ring-rebalance-sx654\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " pod="openstack/swift-ring-rebalance-sx654" Nov 23 15:00:45 crc kubenswrapper[4718]: I1123 15:00:45.192659 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79cae030-375c-4b0e-9dfc-e823f922196b-combined-ca-bundle\") pod \"swift-ring-rebalance-sx654\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " pod="openstack/swift-ring-rebalance-sx654" Nov 23 15:00:45 crc kubenswrapper[4718]: I1123 15:00:45.192686 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/79cae030-375c-4b0e-9dfc-e823f922196b-etc-swift\") pod \"swift-ring-rebalance-sx654\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " pod="openstack/swift-ring-rebalance-sx654" Nov 23 15:00:45 crc kubenswrapper[4718]: I1123 15:00:45.192719 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/79cae030-375c-4b0e-9dfc-e823f922196b-swiftconf\") pod \"swift-ring-rebalance-sx654\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " pod="openstack/swift-ring-rebalance-sx654" Nov 23 15:00:45 crc kubenswrapper[4718]: I1123 15:00:45.192752 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79cae030-375c-4b0e-9dfc-e823f922196b-scripts\") pod \"swift-ring-rebalance-sx654\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " pod="openstack/swift-ring-rebalance-sx654" Nov 23 15:00:45 crc kubenswrapper[4718]: E1123 15:00:45.192761 4718 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 15:00:45 crc kubenswrapper[4718]: I1123 15:00:45.192790 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/79cae030-375c-4b0e-9dfc-e823f922196b-ring-data-devices\") pod \"swift-ring-rebalance-sx654\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " pod="openstack/swift-ring-rebalance-sx654" Nov 23 15:00:45 crc kubenswrapper[4718]: I1123 15:00:45.192817 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpsnl\" (UniqueName: \"kubernetes.io/projected/79cae030-375c-4b0e-9dfc-e823f922196b-kube-api-access-qpsnl\") pod \"swift-ring-rebalance-sx654\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " pod="openstack/swift-ring-rebalance-sx654" Nov 23 15:00:45 crc kubenswrapper[4718]: E1123 15:00:45.192795 4718 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 23 15:00:45 crc kubenswrapper[4718]: E1123 15:00:45.193176 4718 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/ef94753b-867a-4e46-9ff8-66178f25efaa-etc-swift podName:ef94753b-867a-4e46-9ff8-66178f25efaa nodeName:}" failed. No retries permitted until 2025-11-23 15:00:46.1931573 +0000 UTC m=+897.432777144 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ef94753b-867a-4e46-9ff8-66178f25efaa-etc-swift") pod "swift-storage-0" (UID: "ef94753b-867a-4e46-9ff8-66178f25efaa") : configmap "swift-ring-files" not found Nov 23 15:00:45 crc kubenswrapper[4718]: I1123 15:00:45.193739 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/79cae030-375c-4b0e-9dfc-e823f922196b-etc-swift\") pod \"swift-ring-rebalance-sx654\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " pod="openstack/swift-ring-rebalance-sx654" Nov 23 15:00:45 crc kubenswrapper[4718]: I1123 15:00:45.194150 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/79cae030-375c-4b0e-9dfc-e823f922196b-ring-data-devices\") pod \"swift-ring-rebalance-sx654\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " pod="openstack/swift-ring-rebalance-sx654" Nov 23 15:00:45 crc kubenswrapper[4718]: I1123 15:00:45.194238 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79cae030-375c-4b0e-9dfc-e823f922196b-scripts\") pod \"swift-ring-rebalance-sx654\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " pod="openstack/swift-ring-rebalance-sx654" Nov 23 15:00:45 crc kubenswrapper[4718]: I1123 15:00:45.197016 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79cae030-375c-4b0e-9dfc-e823f922196b-combined-ca-bundle\") pod \"swift-ring-rebalance-sx654\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " pod="openstack/swift-ring-rebalance-sx654" Nov 23 15:00:45 crc kubenswrapper[4718]: I1123 15:00:45.197687 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/79cae030-375c-4b0e-9dfc-e823f922196b-dispersionconf\") pod \"swift-ring-rebalance-sx654\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " pod="openstack/swift-ring-rebalance-sx654" Nov 23 15:00:45 crc kubenswrapper[4718]: I1123 15:00:45.199191 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/79cae030-375c-4b0e-9dfc-e823f922196b-swiftconf\") pod \"swift-ring-rebalance-sx654\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " pod="openstack/swift-ring-rebalance-sx654" Nov 23 15:00:45 crc kubenswrapper[4718]: I1123 15:00:45.218046 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpsnl\" (UniqueName: \"kubernetes.io/projected/79cae030-375c-4b0e-9dfc-e823f922196b-kube-api-access-qpsnl\") pod \"swift-ring-rebalance-sx654\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " pod="openstack/swift-ring-rebalance-sx654" Nov 23 15:00:45 crc kubenswrapper[4718]: I1123 15:00:45.312665 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-sx654" Nov 23 15:00:46 crc kubenswrapper[4718]: I1123 15:00:46.212566 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ef94753b-867a-4e46-9ff8-66178f25efaa-etc-swift\") pod \"swift-storage-0\" (UID: \"ef94753b-867a-4e46-9ff8-66178f25efaa\") " pod="openstack/swift-storage-0" Nov 23 15:00:46 crc kubenswrapper[4718]: E1123 15:00:46.212739 4718 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 15:00:46 crc kubenswrapper[4718]: E1123 15:00:46.212943 4718 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 23 15:00:46 crc kubenswrapper[4718]: E1123 15:00:46.213011 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ef94753b-867a-4e46-9ff8-66178f25efaa-etc-swift podName:ef94753b-867a-4e46-9ff8-66178f25efaa nodeName:}" failed. No retries permitted until 2025-11-23 15:00:48.212990323 +0000 UTC m=+899.452610167 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ef94753b-867a-4e46-9ff8-66178f25efaa-etc-swift") pod "swift-storage-0" (UID: "ef94753b-867a-4e46-9ff8-66178f25efaa") : configmap "swift-ring-files" not found Nov 23 15:00:46 crc kubenswrapper[4718]: I1123 15:00:46.629985 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-sx654"] Nov 23 15:00:46 crc kubenswrapper[4718]: W1123 15:00:46.634666 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79cae030_375c_4b0e_9dfc_e823f922196b.slice/crio-952233de002cfde8cca90b1c48abf3b74e15246e36443b57446452962fd5e06d WatchSource:0}: Error finding container 952233de002cfde8cca90b1c48abf3b74e15246e36443b57446452962fd5e06d: Status 404 returned error can't find the container with id 952233de002cfde8cca90b1c48abf3b74e15246e36443b57446452962fd5e06d Nov 23 15:00:46 crc kubenswrapper[4718]: I1123 15:00:46.689186 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-hspwc"] Nov 23 15:00:46 crc kubenswrapper[4718]: W1123 15:00:46.701516 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod561f5ecd_5dfc_47a0_be21_b89eaf87f469.slice/crio-4e49379f51ebfd27c42182b643916d9e4419243acb094d14dbb29439c4662359 WatchSource:0}: Error finding container 4e49379f51ebfd27c42182b643916d9e4419243acb094d14dbb29439c4662359: Status 404 returned error can't find the container with id 4e49379f51ebfd27c42182b643916d9e4419243acb094d14dbb29439c4662359 Nov 23 15:00:47 crc kubenswrapper[4718]: I1123 15:00:47.046392 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"33d2daa7-22e5-4713-9cc1-3d976c1559e3","Type":"ContainerStarted","Data":"72fd60b6a8193eed0ccc7500102d19c55eb1a888fe14dd41f996d824d9b03a9c"} Nov 23 15:00:47 crc kubenswrapper[4718]: I1123 15:00:47.047804 4718 generic.go:334] "Generic (PLEG): container finished" podID="561f5ecd-5dfc-47a0-be21-b89eaf87f469" containerID="294a98e65e415248031cd4139dd0fd120e44c4824787f344b57061998a132975" exitCode=0 Nov 23 15:00:47 crc kubenswrapper[4718]: I1123 15:00:47.047866 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7cb5889db5-hspwc" event={"ID":"561f5ecd-5dfc-47a0-be21-b89eaf87f469","Type":"ContainerDied","Data":"294a98e65e415248031cd4139dd0fd120e44c4824787f344b57061998a132975"} Nov 23 15:00:47 crc kubenswrapper[4718]: I1123 15:00:47.047882 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-hspwc" event={"ID":"561f5ecd-5dfc-47a0-be21-b89eaf87f469","Type":"ContainerStarted","Data":"4e49379f51ebfd27c42182b643916d9e4419243acb094d14dbb29439c4662359"} Nov 23 15:00:47 crc kubenswrapper[4718]: I1123 15:00:47.049055 4718 generic.go:334] "Generic (PLEG): container finished" podID="9976daf8-2e2d-4a74-b9ef-3c052cbe1972" containerID="6feaf157aa06d997078d18643458babde8a58dcc0a6f82ddf69c4a78b7721445" exitCode=0 Nov 23 15:00:47 crc kubenswrapper[4718]: I1123 15:00:47.049123 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-f28mj" event={"ID":"9976daf8-2e2d-4a74-b9ef-3c052cbe1972","Type":"ContainerDied","Data":"6feaf157aa06d997078d18643458babde8a58dcc0a6f82ddf69c4a78b7721445"} Nov 23 15:00:47 crc kubenswrapper[4718]: I1123 15:00:47.051524 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a0d21970-a68c-4d2b-bbcb-18ae83284d95","Type":"ContainerStarted","Data":"ae69d8b2ad8868144c8cea4b62af768d31659291ce3bf4c5e71b2d2c7e2eabe3"} Nov 23 15:00:47 crc kubenswrapper[4718]: I1123 15:00:47.053116 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sx654" event={"ID":"79cae030-375c-4b0e-9dfc-e823f922196b","Type":"ContainerStarted","Data":"952233de002cfde8cca90b1c48abf3b74e15246e36443b57446452962fd5e06d"} Nov 23 15:00:47 crc kubenswrapper[4718]: I1123 15:00:47.054551 4718 generic.go:334] "Generic (PLEG): container finished" podID="32168efd-2782-4ed5-9422-8c77a0a04955" containerID="0e07fc4fbcdb0039567f783812286d7dccd0ce4271e52c50868c4a613c7d442c" exitCode=0 Nov 23 15:00:47 crc kubenswrapper[4718]: I1123 15:00:47.054585 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-lhjc2" event={"ID":"32168efd-2782-4ed5-9422-8c77a0a04955","Type":"ContainerDied","Data":"0e07fc4fbcdb0039567f783812286d7dccd0ce4271e52c50868c4a613c7d442c"} Nov 23 15:00:47 crc kubenswrapper[4718]: I1123 15:00:47.077168 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=21.851584172 podStartE2EDuration="49.077152029s" podCreationTimestamp="2025-11-23 14:59:58 +0000 UTC" firstStartedPulling="2025-11-23 15:00:18.932159975 +0000 UTC m=+870.171779819" lastFinishedPulling="2025-11-23 15:00:46.157727832 +0000 UTC m=+897.397347676" observedRunningTime="2025-11-23 15:00:47.07093854 +0000 UTC m=+898.310558374" watchObservedRunningTime="2025-11-23 15:00:47.077152029 +0000 UTC m=+898.316771883" Nov 23 15:00:47 crc kubenswrapper[4718]: I1123 15:00:47.095528 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=21.473408697 podStartE2EDuration="49.095507507s" podCreationTimestamp="2025-11-23 14:59:58 +0000 UTC" firstStartedPulling="2025-11-23 15:00:18.642872853 +0000 UTC m=+869.882492697" lastFinishedPulling="2025-11-23 15:00:46.264971663 +0000 UTC m=+897.504591507" observedRunningTime="2025-11-23 15:00:47.08860084 +0000 UTC m=+898.328220704" watchObservedRunningTime="2025-11-23 15:00:47.095507507 +0000 UTC m=+898.335127351" Nov 23 15:00:47 crc kubenswrapper[4718]: E1123 15:00:47.299668 4718 
log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Nov 23 15:00:47 crc kubenswrapper[4718]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/32168efd-2782-4ed5-9422-8c77a0a04955/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 23 15:00:47 crc kubenswrapper[4718]: > podSandboxID="a090706bcd6151bb48eea223ccd583e956520cb35a0fe23e78f4c5a9f18f8091" Nov 23 15:00:47 crc kubenswrapper[4718]: E1123 15:00:47.300125 4718 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 23 15:00:47 crc kubenswrapper[4718]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vsn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-lhjc2_openstack(32168efd-2782-4ed5-9422-8c77a0a04955): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/32168efd-2782-4ed5-9422-8c77a0a04955/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 23 15:00:47 crc kubenswrapper[4718]: > 
logger="UnhandledError" Nov 23 15:00:47 crc kubenswrapper[4718]: E1123 15:00:47.301195 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/32168efd-2782-4ed5-9422-8c77a0a04955/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-57d769cc4f-lhjc2" podUID="32168efd-2782-4ed5-9422-8c77a0a04955" Nov 23 15:00:47 crc kubenswrapper[4718]: I1123 15:00:47.370808 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-f28mj" Nov 23 15:00:47 crc kubenswrapper[4718]: I1123 15:00:47.394743 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 23 15:00:47 crc kubenswrapper[4718]: I1123 15:00:47.433478 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9976daf8-2e2d-4a74-b9ef-3c052cbe1972-config\") pod \"9976daf8-2e2d-4a74-b9ef-3c052cbe1972\" (UID: \"9976daf8-2e2d-4a74-b9ef-3c052cbe1972\") " Nov 23 15:00:47 crc kubenswrapper[4718]: I1123 15:00:47.433585 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9976daf8-2e2d-4a74-b9ef-3c052cbe1972-dns-svc\") pod \"9976daf8-2e2d-4a74-b9ef-3c052cbe1972\" (UID: \"9976daf8-2e2d-4a74-b9ef-3c052cbe1972\") " Nov 23 15:00:47 crc kubenswrapper[4718]: I1123 15:00:47.433861 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dft66\" (UniqueName: \"kubernetes.io/projected/9976daf8-2e2d-4a74-b9ef-3c052cbe1972-kube-api-access-dft66\") pod \"9976daf8-2e2d-4a74-b9ef-3c052cbe1972\" (UID: \"9976daf8-2e2d-4a74-b9ef-3c052cbe1972\") " Nov 23 15:00:47 crc kubenswrapper[4718]: I1123 15:00:47.442556 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9976daf8-2e2d-4a74-b9ef-3c052cbe1972-kube-api-access-dft66" (OuterVolumeSpecName: "kube-api-access-dft66") pod "9976daf8-2e2d-4a74-b9ef-3c052cbe1972" (UID: "9976daf8-2e2d-4a74-b9ef-3c052cbe1972"). InnerVolumeSpecName "kube-api-access-dft66". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:00:47 crc kubenswrapper[4718]: I1123 15:00:47.455328 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 23 15:00:47 crc kubenswrapper[4718]: I1123 15:00:47.464998 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9976daf8-2e2d-4a74-b9ef-3c052cbe1972-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9976daf8-2e2d-4a74-b9ef-3c052cbe1972" (UID: "9976daf8-2e2d-4a74-b9ef-3c052cbe1972"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:00:47 crc kubenswrapper[4718]: I1123 15:00:47.481353 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9976daf8-2e2d-4a74-b9ef-3c052cbe1972-config" (OuterVolumeSpecName: "config") pod "9976daf8-2e2d-4a74-b9ef-3c052cbe1972" (UID: "9976daf8-2e2d-4a74-b9ef-3c052cbe1972"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:00:47 crc kubenswrapper[4718]: I1123 15:00:47.535736 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9976daf8-2e2d-4a74-b9ef-3c052cbe1972-config\") on node \"crc\" DevicePath \"\"" Nov 23 15:00:47 crc kubenswrapper[4718]: I1123 15:00:47.535780 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9976daf8-2e2d-4a74-b9ef-3c052cbe1972-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 15:00:47 crc kubenswrapper[4718]: I1123 15:00:47.535941 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dft66\" (UniqueName: \"kubernetes.io/projected/9976daf8-2e2d-4a74-b9ef-3c052cbe1972-kube-api-access-dft66\") on node \"crc\" DevicePath \"\"" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.066352 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-hspwc" event={"ID":"561f5ecd-5dfc-47a0-be21-b89eaf87f469","Type":"ContainerStarted","Data":"de44a6b05be6f9018e2b8192d2a225e5f3badeacd19b24c7761741f28c464dcd"} Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.066664 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-hspwc" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.068986 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-f28mj" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.068995 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-f28mj" event={"ID":"9976daf8-2e2d-4a74-b9ef-3c052cbe1972","Type":"ContainerDied","Data":"a899d4c0750085bd7d6102a30a387bb5e9b16423ac1281f80f0f84c31ab68eca"} Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.069036 4718 scope.go:117] "RemoveContainer" containerID="6feaf157aa06d997078d18643458babde8a58dcc0a6f82ddf69c4a78b7721445" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.069659 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.094378 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-hspwc" podStartSLOduration=5.09435833 podStartE2EDuration="5.09435833s" podCreationTimestamp="2025-11-23 15:00:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:00:48.088422909 +0000 UTC m=+899.328042753" watchObservedRunningTime="2025-11-23 15:00:48.09435833 +0000 UTC m=+899.333978174" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.132325 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.144840 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-f28mj"] Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.150831 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-f28mj"] Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.255515 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ef94753b-867a-4e46-9ff8-66178f25efaa-etc-swift\") pod \"swift-storage-0\" (UID: 
\"ef94753b-867a-4e46-9ff8-66178f25efaa\") " pod="openstack/swift-storage-0" Nov 23 15:00:48 crc kubenswrapper[4718]: E1123 15:00:48.255720 4718 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 15:00:48 crc kubenswrapper[4718]: E1123 15:00:48.255732 4718 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 23 15:00:48 crc kubenswrapper[4718]: E1123 15:00:48.255773 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ef94753b-867a-4e46-9ff8-66178f25efaa-etc-swift podName:ef94753b-867a-4e46-9ff8-66178f25efaa nodeName:}" failed. No retries permitted until 2025-11-23 15:00:52.25576036 +0000 UTC m=+903.495380204 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ef94753b-867a-4e46-9ff8-66178f25efaa-etc-swift") pod "swift-storage-0" (UID: "ef94753b-867a-4e46-9ff8-66178f25efaa") : configmap "swift-ring-files" not found Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.346454 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.385959 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-lhjc2"] Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.396090 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.409738 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-ht49g"] Nov 23 15:00:48 crc kubenswrapper[4718]: E1123 15:00:48.410139 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9976daf8-2e2d-4a74-b9ef-3c052cbe1972" containerName="init" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.410157 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9976daf8-2e2d-4a74-b9ef-3c052cbe1972" containerName="init" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.410430 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="9976daf8-2e2d-4a74-b9ef-3c052cbe1972" containerName="init" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.411495 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.415208 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.441461 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-ht49g"] Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.460170 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0f7d965-6d02-4df9-892d-87ba4ea7838c-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-ht49g\" (UID: \"f0f7d965-6d02-4df9-892d-87ba4ea7838c\") " pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.460217 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0f7d965-6d02-4df9-892d-87ba4ea7838c-config\") pod \"dnsmasq-dns-6c89d5d749-ht49g\" (UID: \"f0f7d965-6d02-4df9-892d-87ba4ea7838c\") " pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.460246 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xftpf\" (UniqueName: \"kubernetes.io/projected/f0f7d965-6d02-4df9-892d-87ba4ea7838c-kube-api-access-xftpf\") pod \"dnsmasq-dns-6c89d5d749-ht49g\" (UID: \"f0f7d965-6d02-4df9-892d-87ba4ea7838c\") " pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.460266 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0f7d965-6d02-4df9-892d-87ba4ea7838c-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-ht49g\" (UID: \"f0f7d965-6d02-4df9-892d-87ba4ea7838c\") " pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.470742 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9976daf8-2e2d-4a74-b9ef-3c052cbe1972" path="/var/lib/kubelet/pods/9976daf8-2e2d-4a74-b9ef-3c052cbe1972/volumes" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.500786 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-jfjhx"] Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.504153 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-jfjhx" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.507677 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.529269 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jfjhx"] Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.565242 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ae0875-d71a-40f8-8db0-6af6b7acd60f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jfjhx\" (UID: \"d8ae0875-d71a-40f8-8db0-6af6b7acd60f\") " pod="openstack/ovn-controller-metrics-jfjhx" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.565301 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ae0875-d71a-40f8-8db0-6af6b7acd60f-combined-ca-bundle\") pod \"ovn-controller-metrics-jfjhx\" (UID: \"d8ae0875-d71a-40f8-8db0-6af6b7acd60f\") " pod="openstack/ovn-controller-metrics-jfjhx" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.565322 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q2lx\" (UniqueName: \"kubernetes.io/projected/d8ae0875-d71a-40f8-8db0-6af6b7acd60f-kube-api-access-6q2lx\") pod \"ovn-controller-metrics-jfjhx\" (UID: \"d8ae0875-d71a-40f8-8db0-6af6b7acd60f\") " pod="openstack/ovn-controller-metrics-jfjhx" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.565347 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8ae0875-d71a-40f8-8db0-6af6b7acd60f-config\") pod \"ovn-controller-metrics-jfjhx\" (UID: \"d8ae0875-d71a-40f8-8db0-6af6b7acd60f\") " pod="openstack/ovn-controller-metrics-jfjhx" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.565375 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0f7d965-6d02-4df9-892d-87ba4ea7838c-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-ht49g\" (UID: \"f0f7d965-6d02-4df9-892d-87ba4ea7838c\") " pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.565458 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0f7d965-6d02-4df9-892d-87ba4ea7838c-config\") pod \"dnsmasq-dns-6c89d5d749-ht49g\" (UID: \"f0f7d965-6d02-4df9-892d-87ba4ea7838c\") " pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.565545 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d8ae0875-d71a-40f8-8db0-6af6b7acd60f-ovn-rundir\") pod \"ovn-controller-metrics-jfjhx\" (UID: \"d8ae0875-d71a-40f8-8db0-6af6b7acd60f\") " pod="openstack/ovn-controller-metrics-jfjhx" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.565582 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xftpf\" (UniqueName: \"kubernetes.io/projected/f0f7d965-6d02-4df9-892d-87ba4ea7838c-kube-api-access-xftpf\") pod \"dnsmasq-dns-6c89d5d749-ht49g\" (UID: 
\"f0f7d965-6d02-4df9-892d-87ba4ea7838c\") " pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.565649 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0f7d965-6d02-4df9-892d-87ba4ea7838c-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-ht49g\" (UID: \"f0f7d965-6d02-4df9-892d-87ba4ea7838c\") " pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.565810 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d8ae0875-d71a-40f8-8db0-6af6b7acd60f-ovs-rundir\") pod \"ovn-controller-metrics-jfjhx\" (UID: \"d8ae0875-d71a-40f8-8db0-6af6b7acd60f\") " pod="openstack/ovn-controller-metrics-jfjhx" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.566266 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0f7d965-6d02-4df9-892d-87ba4ea7838c-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-ht49g\" (UID: \"f0f7d965-6d02-4df9-892d-87ba4ea7838c\") " pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.566264 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0f7d965-6d02-4df9-892d-87ba4ea7838c-config\") pod \"dnsmasq-dns-6c89d5d749-ht49g\" (UID: \"f0f7d965-6d02-4df9-892d-87ba4ea7838c\") " pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.568741 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0f7d965-6d02-4df9-892d-87ba4ea7838c-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-ht49g\" (UID: \"f0f7d965-6d02-4df9-892d-87ba4ea7838c\") " pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.586313 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xftpf\" (UniqueName: \"kubernetes.io/projected/f0f7d965-6d02-4df9-892d-87ba4ea7838c-kube-api-access-xftpf\") pod \"dnsmasq-dns-6c89d5d749-ht49g\" (UID: \"f0f7d965-6d02-4df9-892d-87ba4ea7838c\") " pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.666977 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ae0875-d71a-40f8-8db0-6af6b7acd60f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jfjhx\" (UID: \"d8ae0875-d71a-40f8-8db0-6af6b7acd60f\") " pod="openstack/ovn-controller-metrics-jfjhx" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.667277 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q2lx\" (UniqueName: \"kubernetes.io/projected/d8ae0875-d71a-40f8-8db0-6af6b7acd60f-kube-api-access-6q2lx\") pod \"ovn-controller-metrics-jfjhx\" (UID: \"d8ae0875-d71a-40f8-8db0-6af6b7acd60f\") " pod="openstack/ovn-controller-metrics-jfjhx" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.667296 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ae0875-d71a-40f8-8db0-6af6b7acd60f-combined-ca-bundle\") pod \"ovn-controller-metrics-jfjhx\" (UID: \"d8ae0875-d71a-40f8-8db0-6af6b7acd60f\") " 
pod="openstack/ovn-controller-metrics-jfjhx" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.667386 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8ae0875-d71a-40f8-8db0-6af6b7acd60f-config\") pod \"ovn-controller-metrics-jfjhx\" (UID: \"d8ae0875-d71a-40f8-8db0-6af6b7acd60f\") " pod="openstack/ovn-controller-metrics-jfjhx" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.667448 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d8ae0875-d71a-40f8-8db0-6af6b7acd60f-ovn-rundir\") pod \"ovn-controller-metrics-jfjhx\" (UID: \"d8ae0875-d71a-40f8-8db0-6af6b7acd60f\") " pod="openstack/ovn-controller-metrics-jfjhx" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.667526 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d8ae0875-d71a-40f8-8db0-6af6b7acd60f-ovs-rundir\") pod \"ovn-controller-metrics-jfjhx\" (UID: \"d8ae0875-d71a-40f8-8db0-6af6b7acd60f\") " pod="openstack/ovn-controller-metrics-jfjhx" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.667744 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d8ae0875-d71a-40f8-8db0-6af6b7acd60f-ovs-rundir\") pod \"ovn-controller-metrics-jfjhx\" (UID: \"d8ae0875-d71a-40f8-8db0-6af6b7acd60f\") " pod="openstack/ovn-controller-metrics-jfjhx" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.667892 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d8ae0875-d71a-40f8-8db0-6af6b7acd60f-ovn-rundir\") pod \"ovn-controller-metrics-jfjhx\" (UID: \"d8ae0875-d71a-40f8-8db0-6af6b7acd60f\") " pod="openstack/ovn-controller-metrics-jfjhx" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.668950 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8ae0875-d71a-40f8-8db0-6af6b7acd60f-config\") pod \"ovn-controller-metrics-jfjhx\" (UID: \"d8ae0875-d71a-40f8-8db0-6af6b7acd60f\") " pod="openstack/ovn-controller-metrics-jfjhx" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.671234 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ae0875-d71a-40f8-8db0-6af6b7acd60f-combined-ca-bundle\") pod \"ovn-controller-metrics-jfjhx\" (UID: \"d8ae0875-d71a-40f8-8db0-6af6b7acd60f\") " pod="openstack/ovn-controller-metrics-jfjhx" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.671536 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ae0875-d71a-40f8-8db0-6af6b7acd60f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jfjhx\" (UID: \"d8ae0875-d71a-40f8-8db0-6af6b7acd60f\") " pod="openstack/ovn-controller-metrics-jfjhx" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.683882 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q2lx\" (UniqueName: \"kubernetes.io/projected/d8ae0875-d71a-40f8-8db0-6af6b7acd60f-kube-api-access-6q2lx\") pod \"ovn-controller-metrics-jfjhx\" (UID: \"d8ae0875-d71a-40f8-8db0-6af6b7acd60f\") " pod="openstack/ovn-controller-metrics-jfjhx" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.720054 4718 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-hspwc"] Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.749077 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.754886 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-xwqvg"] Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.756115 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-xwqvg" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.759774 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.779911 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xwqvg"] Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.833104 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-jfjhx" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.870202 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/176e4753-0fcc-464b-b49e-a4b52cf26b5f-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-xwqvg\" (UID: \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\") " pod="openstack/dnsmasq-dns-698758b865-xwqvg" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.870256 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/176e4753-0fcc-464b-b49e-a4b52cf26b5f-dns-svc\") pod \"dnsmasq-dns-698758b865-xwqvg\" (UID: \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\") " pod="openstack/dnsmasq-dns-698758b865-xwqvg" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.870352 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/176e4753-0fcc-464b-b49e-a4b52cf26b5f-config\") pod \"dnsmasq-dns-698758b865-xwqvg\" (UID: \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\") " pod="openstack/dnsmasq-dns-698758b865-xwqvg" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.870617 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/176e4753-0fcc-464b-b49e-a4b52cf26b5f-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-xwqvg\" (UID: \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\") " pod="openstack/dnsmasq-dns-698758b865-xwqvg" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.870712 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frlsc\" (UniqueName: \"kubernetes.io/projected/176e4753-0fcc-464b-b49e-a4b52cf26b5f-kube-api-access-frlsc\") pod \"dnsmasq-dns-698758b865-xwqvg\" (UID: \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\") " pod="openstack/dnsmasq-dns-698758b865-xwqvg" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.972550 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/176e4753-0fcc-464b-b49e-a4b52cf26b5f-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-xwqvg\" (UID: \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\") " pod="openstack/dnsmasq-dns-698758b865-xwqvg" Nov 23 15:00:48 crc 
kubenswrapper[4718]: I1123 15:00:48.972609 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frlsc\" (UniqueName: \"kubernetes.io/projected/176e4753-0fcc-464b-b49e-a4b52cf26b5f-kube-api-access-frlsc\") pod \"dnsmasq-dns-698758b865-xwqvg\" (UID: \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\") " pod="openstack/dnsmasq-dns-698758b865-xwqvg" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.972688 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/176e4753-0fcc-464b-b49e-a4b52cf26b5f-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-xwqvg\" (UID: \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\") " pod="openstack/dnsmasq-dns-698758b865-xwqvg" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.972707 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/176e4753-0fcc-464b-b49e-a4b52cf26b5f-dns-svc\") pod \"dnsmasq-dns-698758b865-xwqvg\" (UID: \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\") " pod="openstack/dnsmasq-dns-698758b865-xwqvg" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.972733 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/176e4753-0fcc-464b-b49e-a4b52cf26b5f-config\") pod \"dnsmasq-dns-698758b865-xwqvg\" (UID: \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\") " pod="openstack/dnsmasq-dns-698758b865-xwqvg" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.973578 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/176e4753-0fcc-464b-b49e-a4b52cf26b5f-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-xwqvg\" (UID: \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\") " pod="openstack/dnsmasq-dns-698758b865-xwqvg" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.973713 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/176e4753-0fcc-464b-b49e-a4b52cf26b5f-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-xwqvg\" (UID: \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\") " pod="openstack/dnsmasq-dns-698758b865-xwqvg" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.974121 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/176e4753-0fcc-464b-b49e-a4b52cf26b5f-dns-svc\") pod \"dnsmasq-dns-698758b865-xwqvg\" (UID: \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\") " pod="openstack/dnsmasq-dns-698758b865-xwqvg" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.974663 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/176e4753-0fcc-464b-b49e-a4b52cf26b5f-config\") pod \"dnsmasq-dns-698758b865-xwqvg\" (UID: \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\") " pod="openstack/dnsmasq-dns-698758b865-xwqvg" Nov 23 15:00:48 crc kubenswrapper[4718]: I1123 15:00:48.990675 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frlsc\" (UniqueName: \"kubernetes.io/projected/176e4753-0fcc-464b-b49e-a4b52cf26b5f-kube-api-access-frlsc\") pod \"dnsmasq-dns-698758b865-xwqvg\" (UID: \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\") " pod="openstack/dnsmasq-dns-698758b865-xwqvg" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.075664 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.117710 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-xwqvg" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.123694 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.314311 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.315665 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.318565 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.319274 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.319895 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-n7km8" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.325873 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.336273 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.379993 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1696a6b-d5a7-403f-b9d0-168c0e42a937-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f1696a6b-d5a7-403f-b9d0-168c0e42a937\") " pod="openstack/ovn-northd-0" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.380034 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1696a6b-d5a7-403f-b9d0-168c0e42a937-scripts\") pod \"ovn-northd-0\" (UID: \"f1696a6b-d5a7-403f-b9d0-168c0e42a937\") " pod="openstack/ovn-northd-0" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.380083 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1696a6b-d5a7-403f-b9d0-168c0e42a937-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f1696a6b-d5a7-403f-b9d0-168c0e42a937\") " pod="openstack/ovn-northd-0" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.380103 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1696a6b-d5a7-403f-b9d0-168c0e42a937-config\") pod \"ovn-northd-0\" (UID: \"f1696a6b-d5a7-403f-b9d0-168c0e42a937\") " pod="openstack/ovn-northd-0" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.380123 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s557v\" (UniqueName: \"kubernetes.io/projected/f1696a6b-d5a7-403f-b9d0-168c0e42a937-kube-api-access-s557v\") pod \"ovn-northd-0\" (UID: \"f1696a6b-d5a7-403f-b9d0-168c0e42a937\") " pod="openstack/ovn-northd-0" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.380286 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1696a6b-d5a7-403f-b9d0-168c0e42a937-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f1696a6b-d5a7-403f-b9d0-168c0e42a937\") " pod="openstack/ovn-northd-0" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.380518 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f1696a6b-d5a7-403f-b9d0-168c0e42a937-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f1696a6b-d5a7-403f-b9d0-168c0e42a937\") " pod="openstack/ovn-northd-0" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.482508 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1696a6b-d5a7-403f-b9d0-168c0e42a937-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f1696a6b-d5a7-403f-b9d0-168c0e42a937\") " pod="openstack/ovn-northd-0" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.482561 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1696a6b-d5a7-403f-b9d0-168c0e42a937-config\") pod \"ovn-northd-0\" (UID: \"f1696a6b-d5a7-403f-b9d0-168c0e42a937\") " pod="openstack/ovn-northd-0" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.482585 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s557v\" (UniqueName: \"kubernetes.io/projected/f1696a6b-d5a7-403f-b9d0-168c0e42a937-kube-api-access-s557v\") pod \"ovn-northd-0\" (UID: \"f1696a6b-d5a7-403f-b9d0-168c0e42a937\") " pod="openstack/ovn-northd-0" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.482647 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1696a6b-d5a7-403f-b9d0-168c0e42a937-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f1696a6b-d5a7-403f-b9d0-168c0e42a937\") " pod="openstack/ovn-northd-0" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.482723 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f1696a6b-d5a7-403f-b9d0-168c0e42a937-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f1696a6b-d5a7-403f-b9d0-168c0e42a937\") " pod="openstack/ovn-northd-0" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.483191 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f1696a6b-d5a7-403f-b9d0-168c0e42a937-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f1696a6b-d5a7-403f-b9d0-168c0e42a937\") " pod="openstack/ovn-northd-0" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.483243 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1696a6b-d5a7-403f-b9d0-168c0e42a937-scripts\") pod \"ovn-northd-0\" (UID: \"f1696a6b-d5a7-403f-b9d0-168c0e42a937\") " pod="openstack/ovn-northd-0" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.483278 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1696a6b-d5a7-403f-b9d0-168c0e42a937-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f1696a6b-d5a7-403f-b9d0-168c0e42a937\") " pod="openstack/ovn-northd-0" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.483918 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1696a6b-d5a7-403f-b9d0-168c0e42a937-scripts\") pod \"ovn-northd-0\" (UID: \"f1696a6b-d5a7-403f-b9d0-168c0e42a937\") " pod="openstack/ovn-northd-0" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.484199 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1696a6b-d5a7-403f-b9d0-168c0e42a937-config\") pod \"ovn-northd-0\" (UID: \"f1696a6b-d5a7-403f-b9d0-168c0e42a937\") " pod="openstack/ovn-northd-0" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.486114 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1696a6b-d5a7-403f-b9d0-168c0e42a937-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f1696a6b-d5a7-403f-b9d0-168c0e42a937\") " pod="openstack/ovn-northd-0" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.486398 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1696a6b-d5a7-403f-b9d0-168c0e42a937-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f1696a6b-d5a7-403f-b9d0-168c0e42a937\") " pod="openstack/ovn-northd-0" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.487308 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1696a6b-d5a7-403f-b9d0-168c0e42a937-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f1696a6b-d5a7-403f-b9d0-168c0e42a937\") " pod="openstack/ovn-northd-0" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.498594 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s557v\" (UniqueName: \"kubernetes.io/projected/f1696a6b-d5a7-403f-b9d0-168c0e42a937-kube-api-access-s557v\") pod \"ovn-northd-0\" (UID: \"f1696a6b-d5a7-403f-b9d0-168c0e42a937\") " pod="openstack/ovn-northd-0" Nov 23 15:00:49 crc kubenswrapper[4718]: I1123 15:00:49.637081 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 23 15:00:50 crc kubenswrapper[4718]: I1123 15:00:50.082930 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-hspwc" podUID="561f5ecd-5dfc-47a0-be21-b89eaf87f469" containerName="dnsmasq-dns" containerID="cri-o://de44a6b05be6f9018e2b8192d2a225e5f3badeacd19b24c7761741f28c464dcd" gracePeriod=10 Nov 23 15:00:52 crc kubenswrapper[4718]: I1123 15:00:52.100208 4718 generic.go:334] "Generic (PLEG): container finished" podID="adbd2274-f81b-4930-85c1-eec8a7a3790d" containerID="584e6cd1708050ccdca854136bb59d2cbc13f385cca80bac07738b0902671b3e" exitCode=0 Nov 23 15:00:52 crc kubenswrapper[4718]: I1123 15:00:52.100283 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"adbd2274-f81b-4930-85c1-eec8a7a3790d","Type":"ContainerDied","Data":"584e6cd1708050ccdca854136bb59d2cbc13f385cca80bac07738b0902671b3e"} Nov 23 15:00:52 crc kubenswrapper[4718]: I1123 15:00:52.104191 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-hspwc" event={"ID":"561f5ecd-5dfc-47a0-be21-b89eaf87f469","Type":"ContainerDied","Data":"de44a6b05be6f9018e2b8192d2a225e5f3badeacd19b24c7761741f28c464dcd"} Nov 23 15:00:52 crc kubenswrapper[4718]: I1123 15:00:52.104225 4718 generic.go:334] "Generic (PLEG): container finished" podID="561f5ecd-5dfc-47a0-be21-b89eaf87f469" containerID="de44a6b05be6f9018e2b8192d2a225e5f3badeacd19b24c7761741f28c464dcd" exitCode=0 Nov 23 15:00:52 crc kubenswrapper[4718]: I1123 15:00:52.106702 4718 generic.go:334] "Generic (PLEG): container finished" podID="da1e3b17-14ea-456e-a694-073e8fd4edaf" containerID="3867c79ccedd71cf6d3bbaec5d58d590eb5bde5597e35576f7018b302c0118b7" exitCode=0 Nov 23 15:00:52 crc kubenswrapper[4718]: I1123 15:00:52.106741 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"da1e3b17-14ea-456e-a694-073e8fd4edaf","Type":"ContainerDied","Data":"3867c79ccedd71cf6d3bbaec5d58d590eb5bde5597e35576f7018b302c0118b7"} Nov 23 15:00:52 crc kubenswrapper[4718]: I1123 15:00:52.260992 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ef94753b-867a-4e46-9ff8-66178f25efaa-etc-swift\") pod \"swift-storage-0\" (UID: \"ef94753b-867a-4e46-9ff8-66178f25efaa\") " pod="openstack/swift-storage-0" Nov 23 15:00:52 crc kubenswrapper[4718]: E1123 15:00:52.261322 4718 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 15:00:52 crc kubenswrapper[4718]: E1123 15:00:52.261347 4718 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 23 15:00:52 crc kubenswrapper[4718]: E1123 15:00:52.261411 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ef94753b-867a-4e46-9ff8-66178f25efaa-etc-swift podName:ef94753b-867a-4e46-9ff8-66178f25efaa nodeName:}" failed. No retries permitted until 2025-11-23 15:01:00.261388869 +0000 UTC m=+911.501008753 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ef94753b-867a-4e46-9ff8-66178f25efaa-etc-swift") pod "swift-storage-0" (UID: "ef94753b-867a-4e46-9ff8-66178f25efaa") : configmap "swift-ring-files" not found Nov 23 15:00:52 crc kubenswrapper[4718]: I1123 15:00:52.726523 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-lhjc2" Nov 23 15:00:52 crc kubenswrapper[4718]: I1123 15:00:52.769002 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32168efd-2782-4ed5-9422-8c77a0a04955-dns-svc\") pod \"32168efd-2782-4ed5-9422-8c77a0a04955\" (UID: \"32168efd-2782-4ed5-9422-8c77a0a04955\") " Nov 23 15:00:52 crc kubenswrapper[4718]: I1123 15:00:52.769133 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32168efd-2782-4ed5-9422-8c77a0a04955-config\") pod \"32168efd-2782-4ed5-9422-8c77a0a04955\" (UID: \"32168efd-2782-4ed5-9422-8c77a0a04955\") " Nov 23 15:00:52 crc kubenswrapper[4718]: I1123 15:00:52.769278 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vsn2\" (UniqueName: \"kubernetes.io/projected/32168efd-2782-4ed5-9422-8c77a0a04955-kube-api-access-5vsn2\") pod \"32168efd-2782-4ed5-9422-8c77a0a04955\" (UID: \"32168efd-2782-4ed5-9422-8c77a0a04955\") " Nov 23 15:00:52 crc kubenswrapper[4718]: I1123 15:00:52.791878 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32168efd-2782-4ed5-9422-8c77a0a04955-kube-api-access-5vsn2" (OuterVolumeSpecName: "kube-api-access-5vsn2") pod "32168efd-2782-4ed5-9422-8c77a0a04955" (UID: "32168efd-2782-4ed5-9422-8c77a0a04955"). InnerVolumeSpecName "kube-api-access-5vsn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:00:52 crc kubenswrapper[4718]: I1123 15:00:52.830356 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32168efd-2782-4ed5-9422-8c77a0a04955-config" (OuterVolumeSpecName: "config") pod "32168efd-2782-4ed5-9422-8c77a0a04955" (UID: "32168efd-2782-4ed5-9422-8c77a0a04955"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:00:52 crc kubenswrapper[4718]: I1123 15:00:52.843404 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32168efd-2782-4ed5-9422-8c77a0a04955-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "32168efd-2782-4ed5-9422-8c77a0a04955" (UID: "32168efd-2782-4ed5-9422-8c77a0a04955"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:00:52 crc kubenswrapper[4718]: I1123 15:00:52.878741 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32168efd-2782-4ed5-9422-8c77a0a04955-config\") on node \"crc\" DevicePath \"\"" Nov 23 15:00:52 crc kubenswrapper[4718]: I1123 15:00:52.878802 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vsn2\" (UniqueName: \"kubernetes.io/projected/32168efd-2782-4ed5-9422-8c77a0a04955-kube-api-access-5vsn2\") on node \"crc\" DevicePath \"\"" Nov 23 15:00:52 crc kubenswrapper[4718]: I1123 15:00:52.878816 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32168efd-2782-4ed5-9422-8c77a0a04955-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 15:00:52 crc kubenswrapper[4718]: I1123 15:00:52.991342 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-hspwc" Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.053710 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.053773 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.084285 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4fpn\" (UniqueName: \"kubernetes.io/projected/561f5ecd-5dfc-47a0-be21-b89eaf87f469-kube-api-access-w4fpn\") pod \"561f5ecd-5dfc-47a0-be21-b89eaf87f469\" (UID: \"561f5ecd-5dfc-47a0-be21-b89eaf87f469\") " Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.084348 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/561f5ecd-5dfc-47a0-be21-b89eaf87f469-dns-svc\") pod \"561f5ecd-5dfc-47a0-be21-b89eaf87f469\" (UID: \"561f5ecd-5dfc-47a0-be21-b89eaf87f469\") " Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.084575 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/561f5ecd-5dfc-47a0-be21-b89eaf87f469-config\") pod \"561f5ecd-5dfc-47a0-be21-b89eaf87f469\" (UID: \"561f5ecd-5dfc-47a0-be21-b89eaf87f469\") " Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.096637 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/561f5ecd-5dfc-47a0-be21-b89eaf87f469-kube-api-access-w4fpn" (OuterVolumeSpecName: "kube-api-access-w4fpn") pod "561f5ecd-5dfc-47a0-be21-b89eaf87f469" (UID: "561f5ecd-5dfc-47a0-be21-b89eaf87f469"). InnerVolumeSpecName "kube-api-access-w4fpn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.126043 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/561f5ecd-5dfc-47a0-be21-b89eaf87f469-config" (OuterVolumeSpecName: "config") pod "561f5ecd-5dfc-47a0-be21-b89eaf87f469" (UID: "561f5ecd-5dfc-47a0-be21-b89eaf87f469"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.132612 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-hspwc" Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.132834 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/561f5ecd-5dfc-47a0-be21-b89eaf87f469-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "561f5ecd-5dfc-47a0-be21-b89eaf87f469" (UID: "561f5ecd-5dfc-47a0-be21-b89eaf87f469"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.132889 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-hspwc" event={"ID":"561f5ecd-5dfc-47a0-be21-b89eaf87f469","Type":"ContainerDied","Data":"4e49379f51ebfd27c42182b643916d9e4419243acb094d14dbb29439c4662359"} Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.132932 4718 scope.go:117] "RemoveContainer" containerID="de44a6b05be6f9018e2b8192d2a225e5f3badeacd19b24c7761741f28c464dcd" Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.138880 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sx654" event={"ID":"79cae030-375c-4b0e-9dfc-e823f922196b","Type":"ContainerStarted","Data":"7646a021bb1d83549c239d1c5fc00decec93a0a7ccef6af097ee5f982aaa45a8"} Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.159203 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"da1e3b17-14ea-456e-a694-073e8fd4edaf","Type":"ContainerStarted","Data":"2a4b040051d1df49afd98cd916c079f527f7ebaf1e0fce82f4efbe0e3c5d151f"} Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.159524 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-sx654" podStartSLOduration=2.997513588 podStartE2EDuration="9.159498898s" podCreationTimestamp="2025-11-23 15:00:44 +0000 UTC" firstStartedPulling="2025-11-23 15:00:46.637022402 +0000 UTC m=+897.876642246" lastFinishedPulling="2025-11-23 15:00:52.799007712 +0000 UTC m=+904.038627556" observedRunningTime="2025-11-23 15:00:53.155032437 +0000 UTC m=+904.394652281" watchObservedRunningTime="2025-11-23 15:00:53.159498898 +0000 UTC m=+904.399118742" Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.161127 4718 scope.go:117] "RemoveContainer" containerID="294a98e65e415248031cd4139dd0fd120e44c4824787f344b57061998a132975" Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.182690 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-lhjc2" event={"ID":"32168efd-2782-4ed5-9422-8c77a0a04955","Type":"ContainerDied","Data":"a090706bcd6151bb48eea223ccd583e956520cb35a0fe23e78f4c5a9f18f8091"} Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.182964 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-lhjc2" Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.192556 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=55.213417535 podStartE2EDuration="1m4.192538554s" podCreationTimestamp="2025-11-23 14:59:49 +0000 UTC" firstStartedPulling="2025-11-23 15:00:18.662502496 +0000 UTC m=+869.902122340" lastFinishedPulling="2025-11-23 15:00:27.641623485 +0000 UTC m=+878.881243359" observedRunningTime="2025-11-23 15:00:53.180805806 +0000 UTC m=+904.420425650" watchObservedRunningTime="2025-11-23 15:00:53.192538554 +0000 UTC m=+904.432158398" Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.203995 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"adbd2274-f81b-4930-85c1-eec8a7a3790d","Type":"ContainerStarted","Data":"942e1dd619cb2e7a02068459982a2bd3d2460eadfe887f9307b7f9cb419aeab4"} Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.210299 4718 scope.go:117] "RemoveContainer" containerID="0e07fc4fbcdb0039567f783812286d7dccd0ce4271e52c50868c4a613c7d442c" Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.213740 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4fpn\" (UniqueName: \"kubernetes.io/projected/561f5ecd-5dfc-47a0-be21-b89eaf87f469-kube-api-access-w4fpn\") on node \"crc\" DevicePath \"\"" Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.213788 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/561f5ecd-5dfc-47a0-be21-b89eaf87f469-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.213806 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/561f5ecd-5dfc-47a0-be21-b89eaf87f469-config\") on node \"crc\" DevicePath \"\"" Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.270431 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xwqvg"] Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.303000 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jfjhx"] Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.326356 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-ht49g"] Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.341529 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=56.873300904 podStartE2EDuration="1m6.341513459s" podCreationTimestamp="2025-11-23 14:59:47 +0000 UTC" firstStartedPulling="2025-11-23 15:00:18.111304624 +0000 UTC m=+869.350924468" lastFinishedPulling="2025-11-23 15:00:27.579517179 +0000 UTC m=+878.819137023" observedRunningTime="2025-11-23 15:00:53.243634902 +0000 UTC m=+904.483254746" watchObservedRunningTime="2025-11-23 15:00:53.341513459 +0000 UTC m=+904.581133303" Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.358347 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-lhjc2"] Nov 23 15:00:53 crc kubenswrapper[4718]: W1123 15:00:53.362938 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1696a6b_d5a7_403f_b9d0_168c0e42a937.slice/crio-97906392b490d6e0878b76bf87d3a629350f460c0ca92a9162bc945ee847f653 
WatchSource:0}: Error finding container 97906392b490d6e0878b76bf87d3a629350f460c0ca92a9162bc945ee847f653: Status 404 returned error can't find the container with id 97906392b490d6e0878b76bf87d3a629350f460c0ca92a9162bc945ee847f653 Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.363434 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-lhjc2"] Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.368330 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.471464 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-hspwc"] Nov 23 15:00:53 crc kubenswrapper[4718]: I1123 15:00:53.489423 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-hspwc"] Nov 23 15:00:54 crc kubenswrapper[4718]: I1123 15:00:54.216909 4718 generic.go:334] "Generic (PLEG): container finished" podID="f0f7d965-6d02-4df9-892d-87ba4ea7838c" containerID="aa65cc55df0206d72c37d13a30a15fe38af9f6721c854794adcab256657ab8dd" exitCode=0 Nov 23 15:00:54 crc kubenswrapper[4718]: I1123 15:00:54.217009 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" event={"ID":"f0f7d965-6d02-4df9-892d-87ba4ea7838c","Type":"ContainerDied","Data":"aa65cc55df0206d72c37d13a30a15fe38af9f6721c854794adcab256657ab8dd"} Nov 23 15:00:54 crc kubenswrapper[4718]: I1123 15:00:54.217297 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" event={"ID":"f0f7d965-6d02-4df9-892d-87ba4ea7838c","Type":"ContainerStarted","Data":"8b60e8e9f7e44328eef2f8ca2731df3333073a2871e4d592f0f46d5fa5a229b6"} Nov 23 15:00:54 crc kubenswrapper[4718]: I1123 15:00:54.222701 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jfjhx" event={"ID":"d8ae0875-d71a-40f8-8db0-6af6b7acd60f","Type":"ContainerStarted","Data":"98784e282b1605ae48f6477e86bc3c9105437a5cebf6eff6b1b3ec84573a38e2"} Nov 23 15:00:54 crc kubenswrapper[4718]: I1123 15:00:54.222739 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jfjhx" event={"ID":"d8ae0875-d71a-40f8-8db0-6af6b7acd60f","Type":"ContainerStarted","Data":"cbccdab1089ac4c94ef38ccffc48eb6c65b08044eb0438c72d6fdab19ce82f04"} Nov 23 15:00:54 crc kubenswrapper[4718]: I1123 15:00:54.235427 4718 generic.go:334] "Generic (PLEG): container finished" podID="176e4753-0fcc-464b-b49e-a4b52cf26b5f" containerID="a42a3873e671d689611cd65624198cebe225bf2642691942d7c498bac4b4753e" exitCode=0 Nov 23 15:00:54 crc kubenswrapper[4718]: I1123 15:00:54.235603 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-xwqvg" event={"ID":"176e4753-0fcc-464b-b49e-a4b52cf26b5f","Type":"ContainerDied","Data":"a42a3873e671d689611cd65624198cebe225bf2642691942d7c498bac4b4753e"} Nov 23 15:00:54 crc kubenswrapper[4718]: I1123 15:00:54.235678 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-xwqvg" event={"ID":"176e4753-0fcc-464b-b49e-a4b52cf26b5f","Type":"ContainerStarted","Data":"2439336c868c3ca60f1bb92810989c92bb68332b2bf3717cf2a0c5bd2e7d56fc"} Nov 23 15:00:54 crc kubenswrapper[4718]: I1123 15:00:54.239830 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"f1696a6b-d5a7-403f-b9d0-168c0e42a937","Type":"ContainerStarted","Data":"97906392b490d6e0878b76bf87d3a629350f460c0ca92a9162bc945ee847f653"} Nov 23 15:00:54 crc kubenswrapper[4718]: I1123 15:00:54.290283 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-jfjhx" podStartSLOduration=6.290264851 podStartE2EDuration="6.290264851s" podCreationTimestamp="2025-11-23 15:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:00:54.274643047 +0000 UTC m=+905.514262911" watchObservedRunningTime="2025-11-23 15:00:54.290264851 +0000 UTC m=+905.529884695" Nov 23 15:00:54 crc kubenswrapper[4718]: I1123 15:00:54.451192 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32168efd-2782-4ed5-9422-8c77a0a04955" path="/var/lib/kubelet/pods/32168efd-2782-4ed5-9422-8c77a0a04955/volumes" Nov 23 15:00:54 crc kubenswrapper[4718]: I1123 15:00:54.451865 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="561f5ecd-5dfc-47a0-be21-b89eaf87f469" path="/var/lib/kubelet/pods/561f5ecd-5dfc-47a0-be21-b89eaf87f469/volumes" Nov 23 15:00:55 crc kubenswrapper[4718]: I1123 15:00:55.251318 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" event={"ID":"f0f7d965-6d02-4df9-892d-87ba4ea7838c","Type":"ContainerStarted","Data":"b5bcb62d65a84be7e8db52791d312ce3d299641be54e348423ce61da8571e638"} Nov 23 15:00:55 crc kubenswrapper[4718]: I1123 15:00:55.251620 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" Nov 23 15:00:55 crc kubenswrapper[4718]: I1123 15:00:55.253392 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-xwqvg" event={"ID":"176e4753-0fcc-464b-b49e-a4b52cf26b5f","Type":"ContainerStarted","Data":"a5feb3db2ea81734ba314bc36324d8893ea0499b549fd108c1ec3f0f13b4e9c8"} Nov 23 15:00:55 crc kubenswrapper[4718]: I1123 15:00:55.253932 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-xwqvg" Nov 23 15:00:55 crc kubenswrapper[4718]: I1123 15:00:55.255267 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f1696a6b-d5a7-403f-b9d0-168c0e42a937","Type":"ContainerStarted","Data":"866c408df23b2646b92c0f5d60a2b59c81855ff9ed721a25b7343f275e7142ce"} Nov 23 15:00:55 crc kubenswrapper[4718]: I1123 15:00:55.272721 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" podStartSLOduration=7.27269689 podStartE2EDuration="7.27269689s" podCreationTimestamp="2025-11-23 15:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:00:55.266324497 +0000 UTC m=+906.505944351" watchObservedRunningTime="2025-11-23 15:00:55.27269689 +0000 UTC m=+906.512316734" Nov 23 15:00:55 crc kubenswrapper[4718]: I1123 15:00:55.290648 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-xwqvg" podStartSLOduration=7.290632828 podStartE2EDuration="7.290632828s" podCreationTimestamp="2025-11-23 15:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:00:55.287905924 +0000 UTC m=+906.527525808" 
watchObservedRunningTime="2025-11-23 15:00:55.290632828 +0000 UTC m=+906.530252672" Nov 23 15:00:56 crc kubenswrapper[4718]: I1123 15:00:56.266456 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f1696a6b-d5a7-403f-b9d0-168c0e42a937","Type":"ContainerStarted","Data":"84caa332afdb477256c0b09dc9283b5b2135953c4924c9d52cca0014ac11b26f"} Nov 23 15:00:56 crc kubenswrapper[4718]: I1123 15:00:56.291471 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=5.864913113 podStartE2EDuration="7.29145097s" podCreationTimestamp="2025-11-23 15:00:49 +0000 UTC" firstStartedPulling="2025-11-23 15:00:53.364847451 +0000 UTC m=+904.604467295" lastFinishedPulling="2025-11-23 15:00:54.791385308 +0000 UTC m=+906.031005152" observedRunningTime="2025-11-23 15:00:56.290192226 +0000 UTC m=+907.529812080" watchObservedRunningTime="2025-11-23 15:00:56.29145097 +0000 UTC m=+907.531070814" Nov 23 15:00:57 crc kubenswrapper[4718]: I1123 15:00:57.277311 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 23 15:00:59 crc kubenswrapper[4718]: I1123 15:00:59.119708 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-xwqvg" Nov 23 15:00:59 crc kubenswrapper[4718]: I1123 15:00:59.190048 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-ht49g"] Nov 23 15:00:59 crc kubenswrapper[4718]: I1123 15:00:59.190324 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" podUID="f0f7d965-6d02-4df9-892d-87ba4ea7838c" containerName="dnsmasq-dns" containerID="cri-o://b5bcb62d65a84be7e8db52791d312ce3d299641be54e348423ce61da8571e638" gracePeriod=10 Nov 23 15:00:59 crc kubenswrapper[4718]: I1123 15:00:59.191643 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" Nov 23 15:00:59 crc kubenswrapper[4718]: I1123 15:00:59.347371 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 23 15:00:59 crc kubenswrapper[4718]: I1123 15:00:59.347762 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.020617 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.108601 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.242631 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xftpf\" (UniqueName: \"kubernetes.io/projected/f0f7d965-6d02-4df9-892d-87ba4ea7838c-kube-api-access-xftpf\") pod \"f0f7d965-6d02-4df9-892d-87ba4ea7838c\" (UID: \"f0f7d965-6d02-4df9-892d-87ba4ea7838c\") " Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.242816 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0f7d965-6d02-4df9-892d-87ba4ea7838c-dns-svc\") pod \"f0f7d965-6d02-4df9-892d-87ba4ea7838c\" (UID: \"f0f7d965-6d02-4df9-892d-87ba4ea7838c\") " Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.242864 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0f7d965-6d02-4df9-892d-87ba4ea7838c-config\") pod \"f0f7d965-6d02-4df9-892d-87ba4ea7838c\" (UID: \"f0f7d965-6d02-4df9-892d-87ba4ea7838c\") " Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.242911 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0f7d965-6d02-4df9-892d-87ba4ea7838c-ovsdbserver-sb\") pod \"f0f7d965-6d02-4df9-892d-87ba4ea7838c\" (UID: \"f0f7d965-6d02-4df9-892d-87ba4ea7838c\") " Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.248870 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0f7d965-6d02-4df9-892d-87ba4ea7838c-kube-api-access-xftpf" (OuterVolumeSpecName: "kube-api-access-xftpf") pod "f0f7d965-6d02-4df9-892d-87ba4ea7838c" (UID: "f0f7d965-6d02-4df9-892d-87ba4ea7838c"). InnerVolumeSpecName "kube-api-access-xftpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.286295 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f7d965-6d02-4df9-892d-87ba4ea7838c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f0f7d965-6d02-4df9-892d-87ba4ea7838c" (UID: "f0f7d965-6d02-4df9-892d-87ba4ea7838c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.300188 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f7d965-6d02-4df9-892d-87ba4ea7838c-config" (OuterVolumeSpecName: "config") pod "f0f7d965-6d02-4df9-892d-87ba4ea7838c" (UID: "f0f7d965-6d02-4df9-892d-87ba4ea7838c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.304383 4718 generic.go:334] "Generic (PLEG): container finished" podID="f0f7d965-6d02-4df9-892d-87ba4ea7838c" containerID="b5bcb62d65a84be7e8db52791d312ce3d299641be54e348423ce61da8571e638" exitCode=0 Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.304481 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.304608 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" event={"ID":"f0f7d965-6d02-4df9-892d-87ba4ea7838c","Type":"ContainerDied","Data":"b5bcb62d65a84be7e8db52791d312ce3d299641be54e348423ce61da8571e638"} Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.304662 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-ht49g" event={"ID":"f0f7d965-6d02-4df9-892d-87ba4ea7838c","Type":"ContainerDied","Data":"8b60e8e9f7e44328eef2f8ca2731df3333073a2871e4d592f0f46d5fa5a229b6"} Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.304681 4718 scope.go:117] "RemoveContainer" containerID="b5bcb62d65a84be7e8db52791d312ce3d299641be54e348423ce61da8571e638" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.322084 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f7d965-6d02-4df9-892d-87ba4ea7838c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f0f7d965-6d02-4df9-892d-87ba4ea7838c" (UID: "f0f7d965-6d02-4df9-892d-87ba4ea7838c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.345144 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ef94753b-867a-4e46-9ff8-66178f25efaa-etc-swift\") pod \"swift-storage-0\" (UID: \"ef94753b-867a-4e46-9ff8-66178f25efaa\") " pod="openstack/swift-storage-0" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.345236 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0f7d965-6d02-4df9-892d-87ba4ea7838c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.345247 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0f7d965-6d02-4df9-892d-87ba4ea7838c-config\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.345255 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0f7d965-6d02-4df9-892d-87ba4ea7838c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.345269 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xftpf\" (UniqueName: \"kubernetes.io/projected/f0f7d965-6d02-4df9-892d-87ba4ea7838c-kube-api-access-xftpf\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:00 crc kubenswrapper[4718]: E1123 15:01:00.345403 4718 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 15:01:00 crc kubenswrapper[4718]: E1123 15:01:00.345451 4718 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 23 15:01:00 crc kubenswrapper[4718]: E1123 15:01:00.345512 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ef94753b-867a-4e46-9ff8-66178f25efaa-etc-swift podName:ef94753b-867a-4e46-9ff8-66178f25efaa nodeName:}" failed. No retries permitted until 2025-11-23 15:01:16.345488571 +0000 UTC m=+927.585108515 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ef94753b-867a-4e46-9ff8-66178f25efaa-etc-swift") pod "swift-storage-0" (UID: "ef94753b-867a-4e46-9ff8-66178f25efaa") : configmap "swift-ring-files" not found Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.389905 4718 scope.go:117] "RemoveContainer" containerID="aa65cc55df0206d72c37d13a30a15fe38af9f6721c854794adcab256657ab8dd" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.400415 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.405120 4718 scope.go:117] "RemoveContainer" containerID="b5bcb62d65a84be7e8db52791d312ce3d299641be54e348423ce61da8571e638" Nov 23 15:01:00 crc kubenswrapper[4718]: E1123 15:01:00.405859 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5bcb62d65a84be7e8db52791d312ce3d299641be54e348423ce61da8571e638\": container with ID starting with b5bcb62d65a84be7e8db52791d312ce3d299641be54e348423ce61da8571e638 not found: ID does not exist" containerID="b5bcb62d65a84be7e8db52791d312ce3d299641be54e348423ce61da8571e638" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.405905 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5bcb62d65a84be7e8db52791d312ce3d299641be54e348423ce61da8571e638"} err="failed to get container status \"b5bcb62d65a84be7e8db52791d312ce3d299641be54e348423ce61da8571e638\": rpc error: code = NotFound desc = could not find container \"b5bcb62d65a84be7e8db52791d312ce3d299641be54e348423ce61da8571e638\": container with ID starting with b5bcb62d65a84be7e8db52791d312ce3d299641be54e348423ce61da8571e638 not found: ID does not exist" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.405932 4718 scope.go:117] "RemoveContainer" containerID="aa65cc55df0206d72c37d13a30a15fe38af9f6721c854794adcab256657ab8dd" Nov 23 15:01:00 crc kubenswrapper[4718]: E1123 15:01:00.406293 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa65cc55df0206d72c37d13a30a15fe38af9f6721c854794adcab256657ab8dd\": container with ID starting with aa65cc55df0206d72c37d13a30a15fe38af9f6721c854794adcab256657ab8dd not found: ID does not exist" containerID="aa65cc55df0206d72c37d13a30a15fe38af9f6721c854794adcab256657ab8dd" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.406342 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa65cc55df0206d72c37d13a30a15fe38af9f6721c854794adcab256657ab8dd"} err="failed to get container status \"aa65cc55df0206d72c37d13a30a15fe38af9f6721c854794adcab256657ab8dd\": rpc error: code = NotFound desc = could not find container \"aa65cc55df0206d72c37d13a30a15fe38af9f6721c854794adcab256657ab8dd\": container with ID starting with aa65cc55df0206d72c37d13a30a15fe38af9f6721c854794adcab256657ab8dd not found: ID does not exist" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.626913 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-ht49g"] Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.636576 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-ht49g"] Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.723496 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/openstack-cell1-galera-0" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.724604 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.735729 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-24bf-account-create-h2qc7"] Nov 23 15:01:00 crc kubenswrapper[4718]: E1123 15:01:00.736034 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32168efd-2782-4ed5-9422-8c77a0a04955" containerName="init" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.736045 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="32168efd-2782-4ed5-9422-8c77a0a04955" containerName="init" Nov 23 15:01:00 crc kubenswrapper[4718]: E1123 15:01:00.736063 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f7d965-6d02-4df9-892d-87ba4ea7838c" containerName="init" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.736069 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f7d965-6d02-4df9-892d-87ba4ea7838c" containerName="init" Nov 23 15:01:00 crc kubenswrapper[4718]: E1123 15:01:00.736086 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="561f5ecd-5dfc-47a0-be21-b89eaf87f469" containerName="dnsmasq-dns" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.736092 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="561f5ecd-5dfc-47a0-be21-b89eaf87f469" containerName="dnsmasq-dns" Nov 23 15:01:00 crc kubenswrapper[4718]: E1123 15:01:00.736110 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="561f5ecd-5dfc-47a0-be21-b89eaf87f469" containerName="init" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.736116 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="561f5ecd-5dfc-47a0-be21-b89eaf87f469" containerName="init" Nov 23 15:01:00 crc kubenswrapper[4718]: E1123 15:01:00.736129 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f7d965-6d02-4df9-892d-87ba4ea7838c" containerName="dnsmasq-dns" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.736135 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f7d965-6d02-4df9-892d-87ba4ea7838c" containerName="dnsmasq-dns" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.736289 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="561f5ecd-5dfc-47a0-be21-b89eaf87f469" containerName="dnsmasq-dns" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.736304 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="32168efd-2782-4ed5-9422-8c77a0a04955" containerName="init" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.736314 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f7d965-6d02-4df9-892d-87ba4ea7838c" containerName="dnsmasq-dns" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.736860 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-24bf-account-create-h2qc7" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.748568 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.754109 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-24bf-account-create-h2qc7"] Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.852246 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc8693ac-cbb7-4468-8600-32ef277c0db1-operator-scripts\") pod \"keystone-24bf-account-create-h2qc7\" (UID: \"bc8693ac-cbb7-4468-8600-32ef277c0db1\") " pod="openstack/keystone-24bf-account-create-h2qc7" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.852751 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmvpz\" (UniqueName: \"kubernetes.io/projected/bc8693ac-cbb7-4468-8600-32ef277c0db1-kube-api-access-hmvpz\") pod \"keystone-24bf-account-create-h2qc7\" (UID: \"bc8693ac-cbb7-4468-8600-32ef277c0db1\") " pod="openstack/keystone-24bf-account-create-h2qc7" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.891481 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-9b42d"] Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.892476 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9b42d" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.901345 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9b42d"] Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.953762 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmvpz\" (UniqueName: \"kubernetes.io/projected/bc8693ac-cbb7-4468-8600-32ef277c0db1-kube-api-access-hmvpz\") pod \"keystone-24bf-account-create-h2qc7\" (UID: \"bc8693ac-cbb7-4468-8600-32ef277c0db1\") " pod="openstack/keystone-24bf-account-create-h2qc7" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.953819 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc8693ac-cbb7-4468-8600-32ef277c0db1-operator-scripts\") pod \"keystone-24bf-account-create-h2qc7\" (UID: \"bc8693ac-cbb7-4468-8600-32ef277c0db1\") " pod="openstack/keystone-24bf-account-create-h2qc7" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.954549 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc8693ac-cbb7-4468-8600-32ef277c0db1-operator-scripts\") pod \"keystone-24bf-account-create-h2qc7\" (UID: \"bc8693ac-cbb7-4468-8600-32ef277c0db1\") " pod="openstack/keystone-24bf-account-create-h2qc7" Nov 23 15:01:00 crc kubenswrapper[4718]: I1123 15:01:00.977703 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmvpz\" (UniqueName: \"kubernetes.io/projected/bc8693ac-cbb7-4468-8600-32ef277c0db1-kube-api-access-hmvpz\") pod \"keystone-24bf-account-create-h2qc7\" (UID: \"bc8693ac-cbb7-4468-8600-32ef277c0db1\") " pod="openstack/keystone-24bf-account-create-h2qc7" Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.040710 4718 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/placement-144d-account-create-7wszp"] Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.043415 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-144d-account-create-7wszp" Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.046368 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.049007 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-144d-account-create-7wszp"] Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.055274 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a426f26e-d114-4570-81da-3be0c4aca095-operator-scripts\") pod \"placement-db-create-9b42d\" (UID: \"a426f26e-d114-4570-81da-3be0c4aca095\") " pod="openstack/placement-db-create-9b42d" Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.055429 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgbjg\" (UniqueName: \"kubernetes.io/projected/a426f26e-d114-4570-81da-3be0c4aca095-kube-api-access-cgbjg\") pod \"placement-db-create-9b42d\" (UID: \"a426f26e-d114-4570-81da-3be0c4aca095\") " pod="openstack/placement-db-create-9b42d" Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.056832 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-24bf-account-create-h2qc7" Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.156975 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a426f26e-d114-4570-81da-3be0c4aca095-operator-scripts\") pod \"placement-db-create-9b42d\" (UID: \"a426f26e-d114-4570-81da-3be0c4aca095\") " pod="openstack/placement-db-create-9b42d" Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.157702 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k95l\" (UniqueName: \"kubernetes.io/projected/f4073833-fe54-4322-9dd2-63d98b5f9788-kube-api-access-9k95l\") pod \"placement-144d-account-create-7wszp\" (UID: \"f4073833-fe54-4322-9dd2-63d98b5f9788\") " pod="openstack/placement-144d-account-create-7wszp" Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.157818 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgbjg\" (UniqueName: \"kubernetes.io/projected/a426f26e-d114-4570-81da-3be0c4aca095-kube-api-access-cgbjg\") pod \"placement-db-create-9b42d\" (UID: \"a426f26e-d114-4570-81da-3be0c4aca095\") " pod="openstack/placement-db-create-9b42d" Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.157895 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4073833-fe54-4322-9dd2-63d98b5f9788-operator-scripts\") pod \"placement-144d-account-create-7wszp\" (UID: \"f4073833-fe54-4322-9dd2-63d98b5f9788\") " pod="openstack/placement-144d-account-create-7wszp" Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.158483 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a426f26e-d114-4570-81da-3be0c4aca095-operator-scripts\") pod \"placement-db-create-9b42d\" (UID: 
\"a426f26e-d114-4570-81da-3be0c4aca095\") " pod="openstack/placement-db-create-9b42d" Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.174746 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgbjg\" (UniqueName: \"kubernetes.io/projected/a426f26e-d114-4570-81da-3be0c4aca095-kube-api-access-cgbjg\") pod \"placement-db-create-9b42d\" (UID: \"a426f26e-d114-4570-81da-3be0c4aca095\") " pod="openstack/placement-db-create-9b42d" Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.207907 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9b42d" Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.262775 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4073833-fe54-4322-9dd2-63d98b5f9788-operator-scripts\") pod \"placement-144d-account-create-7wszp\" (UID: \"f4073833-fe54-4322-9dd2-63d98b5f9788\") " pod="openstack/placement-144d-account-create-7wszp" Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.262923 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k95l\" (UniqueName: \"kubernetes.io/projected/f4073833-fe54-4322-9dd2-63d98b5f9788-kube-api-access-9k95l\") pod \"placement-144d-account-create-7wszp\" (UID: \"f4073833-fe54-4322-9dd2-63d98b5f9788\") " pod="openstack/placement-144d-account-create-7wszp" Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.263433 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4073833-fe54-4322-9dd2-63d98b5f9788-operator-scripts\") pod \"placement-144d-account-create-7wszp\" (UID: \"f4073833-fe54-4322-9dd2-63d98b5f9788\") " pod="openstack/placement-144d-account-create-7wszp" Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.279629 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k95l\" (UniqueName: \"kubernetes.io/projected/f4073833-fe54-4322-9dd2-63d98b5f9788-kube-api-access-9k95l\") pod \"placement-144d-account-create-7wszp\" (UID: \"f4073833-fe54-4322-9dd2-63d98b5f9788\") " pod="openstack/placement-144d-account-create-7wszp" Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.329364 4718 generic.go:334] "Generic (PLEG): container finished" podID="177531fd-3e9f-43b3-9540-a1a59957523e" containerID="7106cb00063fa6ce3f4a7a647b44da4086d5cece529537226fea4f2ca1d965f2" exitCode=0 Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.329493 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"177531fd-3e9f-43b3-9540-a1a59957523e","Type":"ContainerDied","Data":"7106cb00063fa6ce3f4a7a647b44da4086d5cece529537226fea4f2ca1d965f2"} Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.339404 4718 generic.go:334] "Generic (PLEG): container finished" podID="79cae030-375c-4b0e-9dfc-e823f922196b" containerID="7646a021bb1d83549c239d1c5fc00decec93a0a7ccef6af097ee5f982aaa45a8" exitCode=0 Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.340223 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sx654" event={"ID":"79cae030-375c-4b0e-9dfc-e823f922196b","Type":"ContainerDied","Data":"7646a021bb1d83549c239d1c5fc00decec93a0a7ccef6af097ee5f982aaa45a8"} Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.418708 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-144d-account-create-7wszp" Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.485873 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-24bf-account-create-h2qc7"] Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.690462 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9b42d"] Nov 23 15:01:01 crc kubenswrapper[4718]: W1123 15:01:01.694670 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda426f26e_d114_4570_81da_3be0c4aca095.slice/crio-5369fa7ac22d8e4fdea27945b9f148276609dca28fac2396e46220ec801c930c WatchSource:0}: Error finding container 5369fa7ac22d8e4fdea27945b9f148276609dca28fac2396e46220ec801c930c: Status 404 returned error can't find the container with id 5369fa7ac22d8e4fdea27945b9f148276609dca28fac2396e46220ec801c930c Nov 23 15:01:01 crc kubenswrapper[4718]: I1123 15:01:01.835771 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-144d-account-create-7wszp"] Nov 23 15:01:01 crc kubenswrapper[4718]: W1123 15:01:01.839119 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4073833_fe54_4322_9dd2_63d98b5f9788.slice/crio-4329b35d02f6510510343e7325c0caa3635d65c5deeb5b74469bed819c86f3f7 WatchSource:0}: Error finding container 4329b35d02f6510510343e7325c0caa3635d65c5deeb5b74469bed819c86f3f7: Status 404 returned error can't find the container with id 4329b35d02f6510510343e7325c0caa3635d65c5deeb5b74469bed819c86f3f7 Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.347127 4718 generic.go:334] "Generic (PLEG): container finished" podID="bc8693ac-cbb7-4468-8600-32ef277c0db1" containerID="180c6340ecdcd4d2712957cc955e852c2bdc244eb5f606c103eed1fe4d7e0044" exitCode=0 Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.347229 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-24bf-account-create-h2qc7" event={"ID":"bc8693ac-cbb7-4468-8600-32ef277c0db1","Type":"ContainerDied","Data":"180c6340ecdcd4d2712957cc955e852c2bdc244eb5f606c103eed1fe4d7e0044"} Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.347475 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-24bf-account-create-h2qc7" event={"ID":"bc8693ac-cbb7-4468-8600-32ef277c0db1","Type":"ContainerStarted","Data":"c6883a5785899acdc8396b524817032e4805cc382daeabcc6686b2e165c1ac05"} Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.349047 4718 generic.go:334] "Generic (PLEG): container finished" podID="f4073833-fe54-4322-9dd2-63d98b5f9788" containerID="02ef4ad92caf0fc426522862706826d2be6222f8dbce6c94c9b6c25c43e775e7" exitCode=0 Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.349139 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-144d-account-create-7wszp" event={"ID":"f4073833-fe54-4322-9dd2-63d98b5f9788","Type":"ContainerDied","Data":"02ef4ad92caf0fc426522862706826d2be6222f8dbce6c94c9b6c25c43e775e7"} Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.349172 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-144d-account-create-7wszp" event={"ID":"f4073833-fe54-4322-9dd2-63d98b5f9788","Type":"ContainerStarted","Data":"4329b35d02f6510510343e7325c0caa3635d65c5deeb5b74469bed819c86f3f7"} Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.351306 4718 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"177531fd-3e9f-43b3-9540-a1a59957523e","Type":"ContainerStarted","Data":"fd95031a5f7a5b7fe5a3885b0f8a94929e202d03c009ded9a53e6fded604894f"} Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.351498 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.355140 4718 generic.go:334] "Generic (PLEG): container finished" podID="a426f26e-d114-4570-81da-3be0c4aca095" containerID="5c87f207db05d688db35a5f691bc2a6d98780957b66a0b1a80a2535fbdad52eb" exitCode=0 Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.355180 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9b42d" event={"ID":"a426f26e-d114-4570-81da-3be0c4aca095","Type":"ContainerDied","Data":"5c87f207db05d688db35a5f691bc2a6d98780957b66a0b1a80a2535fbdad52eb"} Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.355203 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9b42d" event={"ID":"a426f26e-d114-4570-81da-3be0c4aca095","Type":"ContainerStarted","Data":"5369fa7ac22d8e4fdea27945b9f148276609dca28fac2396e46220ec801c930c"} Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.401214 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.99965076 podStartE2EDuration="1m16.401195617s" podCreationTimestamp="2025-11-23 14:59:46 +0000 UTC" firstStartedPulling="2025-11-23 14:59:51.591037344 +0000 UTC m=+842.830657188" lastFinishedPulling="2025-11-23 15:00:27.992582201 +0000 UTC m=+879.232202045" observedRunningTime="2025-11-23 15:01:02.398065522 +0000 UTC m=+913.637685376" watchObservedRunningTime="2025-11-23 15:01:02.401195617 +0000 UTC m=+913.640815461" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.449187 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0f7d965-6d02-4df9-892d-87ba4ea7838c" path="/var/lib/kubelet/pods/f0f7d965-6d02-4df9-892d-87ba4ea7838c/volumes" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.604138 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-dc8hm" podUID="65b3425b-bedb-4274-a600-091b1910a2d7" containerName="ovn-controller" probeResult="failure" output=< Nov 23 15:01:02 crc kubenswrapper[4718]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 23 15:01:02 crc kubenswrapper[4718]: > Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.711660 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-sx654" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.738592 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-86mpq" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.740718 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-86mpq" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.787513 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79cae030-375c-4b0e-9dfc-e823f922196b-scripts\") pod \"79cae030-375c-4b0e-9dfc-e823f922196b\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.787579 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/79cae030-375c-4b0e-9dfc-e823f922196b-swiftconf\") pod \"79cae030-375c-4b0e-9dfc-e823f922196b\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.787606 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79cae030-375c-4b0e-9dfc-e823f922196b-combined-ca-bundle\") pod \"79cae030-375c-4b0e-9dfc-e823f922196b\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.787682 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/79cae030-375c-4b0e-9dfc-e823f922196b-dispersionconf\") pod \"79cae030-375c-4b0e-9dfc-e823f922196b\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.787716 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpsnl\" (UniqueName: \"kubernetes.io/projected/79cae030-375c-4b0e-9dfc-e823f922196b-kube-api-access-qpsnl\") pod \"79cae030-375c-4b0e-9dfc-e823f922196b\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.787738 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/79cae030-375c-4b0e-9dfc-e823f922196b-ring-data-devices\") pod \"79cae030-375c-4b0e-9dfc-e823f922196b\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.787776 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/79cae030-375c-4b0e-9dfc-e823f922196b-etc-swift\") pod \"79cae030-375c-4b0e-9dfc-e823f922196b\" (UID: \"79cae030-375c-4b0e-9dfc-e823f922196b\") " Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.788525 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79cae030-375c-4b0e-9dfc-e823f922196b-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "79cae030-375c-4b0e-9dfc-e823f922196b" (UID: "79cae030-375c-4b0e-9dfc-e823f922196b"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.788867 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79cae030-375c-4b0e-9dfc-e823f922196b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "79cae030-375c-4b0e-9dfc-e823f922196b" (UID: "79cae030-375c-4b0e-9dfc-e823f922196b"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.796764 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79cae030-375c-4b0e-9dfc-e823f922196b-kube-api-access-qpsnl" (OuterVolumeSpecName: "kube-api-access-qpsnl") pod "79cae030-375c-4b0e-9dfc-e823f922196b" (UID: "79cae030-375c-4b0e-9dfc-e823f922196b"). InnerVolumeSpecName "kube-api-access-qpsnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.797006 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79cae030-375c-4b0e-9dfc-e823f922196b-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "79cae030-375c-4b0e-9dfc-e823f922196b" (UID: "79cae030-375c-4b0e-9dfc-e823f922196b"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.810854 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79cae030-375c-4b0e-9dfc-e823f922196b-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "79cae030-375c-4b0e-9dfc-e823f922196b" (UID: "79cae030-375c-4b0e-9dfc-e823f922196b"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.813702 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79cae030-375c-4b0e-9dfc-e823f922196b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79cae030-375c-4b0e-9dfc-e823f922196b" (UID: "79cae030-375c-4b0e-9dfc-e823f922196b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.814035 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79cae030-375c-4b0e-9dfc-e823f922196b-scripts" (OuterVolumeSpecName: "scripts") pod "79cae030-375c-4b0e-9dfc-e823f922196b" (UID: "79cae030-375c-4b0e-9dfc-e823f922196b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.890352 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79cae030-375c-4b0e-9dfc-e823f922196b-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.890396 4718 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/79cae030-375c-4b0e-9dfc-e823f922196b-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.890408 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79cae030-375c-4b0e-9dfc-e823f922196b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.890417 4718 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/79cae030-375c-4b0e-9dfc-e823f922196b-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.890427 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpsnl\" (UniqueName: \"kubernetes.io/projected/79cae030-375c-4b0e-9dfc-e823f922196b-kube-api-access-qpsnl\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.890458 4718 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/79cae030-375c-4b0e-9dfc-e823f922196b-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.890470 4718 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/79cae030-375c-4b0e-9dfc-e823f922196b-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.910665 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.976872 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-dc8hm-config-dq2l7"] Nov 23 15:01:02 crc kubenswrapper[4718]: E1123 15:01:02.977331 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79cae030-375c-4b0e-9dfc-e823f922196b" containerName="swift-ring-rebalance" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.977365 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="79cae030-375c-4b0e-9dfc-e823f922196b" containerName="swift-ring-rebalance" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.977586 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="79cae030-375c-4b0e-9dfc-e823f922196b" containerName="swift-ring-rebalance" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.978256 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dc8hm-config-dq2l7" Nov 23 15:01:02 crc kubenswrapper[4718]: I1123 15:01:02.981485 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.003558 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dc8hm-config-dq2l7"] Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.030188 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.112693 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8kb5\" (UniqueName: \"kubernetes.io/projected/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-kube-api-access-k8kb5\") pod \"ovn-controller-dc8hm-config-dq2l7\" (UID: \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\") " pod="openstack/ovn-controller-dc8hm-config-dq2l7" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.112732 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-scripts\") pod \"ovn-controller-dc8hm-config-dq2l7\" (UID: \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\") " pod="openstack/ovn-controller-dc8hm-config-dq2l7" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.112798 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-var-run\") pod \"ovn-controller-dc8hm-config-dq2l7\" (UID: \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\") " pod="openstack/ovn-controller-dc8hm-config-dq2l7" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.112838 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-var-log-ovn\") pod \"ovn-controller-dc8hm-config-dq2l7\" (UID: \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\") " pod="openstack/ovn-controller-dc8hm-config-dq2l7" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.112856 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-additional-scripts\") pod \"ovn-controller-dc8hm-config-dq2l7\" (UID: \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\") " pod="openstack/ovn-controller-dc8hm-config-dq2l7" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.112884 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-var-run-ovn\") pod \"ovn-controller-dc8hm-config-dq2l7\" (UID: \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\") " pod="openstack/ovn-controller-dc8hm-config-dq2l7" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.214131 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-var-run-ovn\") pod \"ovn-controller-dc8hm-config-dq2l7\" (UID: \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\") " pod="openstack/ovn-controller-dc8hm-config-dq2l7" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.214242 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8kb5\" (UniqueName: \"kubernetes.io/projected/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-kube-api-access-k8kb5\") pod \"ovn-controller-dc8hm-config-dq2l7\" (UID: \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\") " pod="openstack/ovn-controller-dc8hm-config-dq2l7" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.214264 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-scripts\") pod \"ovn-controller-dc8hm-config-dq2l7\" (UID: \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\") " pod="openstack/ovn-controller-dc8hm-config-dq2l7" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.214300 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-var-run\") pod \"ovn-controller-dc8hm-config-dq2l7\" (UID: \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\") " pod="openstack/ovn-controller-dc8hm-config-dq2l7" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.214340 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-var-log-ovn\") pod \"ovn-controller-dc8hm-config-dq2l7\" (UID: \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\") " pod="openstack/ovn-controller-dc8hm-config-dq2l7" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.214354 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-additional-scripts\") pod \"ovn-controller-dc8hm-config-dq2l7\" (UID: \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\") " pod="openstack/ovn-controller-dc8hm-config-dq2l7" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.214484 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-var-log-ovn\") pod \"ovn-controller-dc8hm-config-dq2l7\" (UID: \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\") " pod="openstack/ovn-controller-dc8hm-config-dq2l7" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.214508 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-var-run\") pod \"ovn-controller-dc8hm-config-dq2l7\" (UID: \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\") " pod="openstack/ovn-controller-dc8hm-config-dq2l7" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.214524 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-var-run-ovn\") pod \"ovn-controller-dc8hm-config-dq2l7\" (UID: \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\") " pod="openstack/ovn-controller-dc8hm-config-dq2l7" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.215067 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-additional-scripts\") pod \"ovn-controller-dc8hm-config-dq2l7\" (UID: \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\") " pod="openstack/ovn-controller-dc8hm-config-dq2l7" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.216883 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-scripts\") pod \"ovn-controller-dc8hm-config-dq2l7\" (UID: \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\") " pod="openstack/ovn-controller-dc8hm-config-dq2l7" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.232398 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8kb5\" (UniqueName: \"kubernetes.io/projected/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-kube-api-access-k8kb5\") pod \"ovn-controller-dc8hm-config-dq2l7\" (UID: \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\") " pod="openstack/ovn-controller-dc8hm-config-dq2l7" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.331375 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dc8hm-config-dq2l7" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.369880 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-sx654" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.371493 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sx654" event={"ID":"79cae030-375c-4b0e-9dfc-e823f922196b","Type":"ContainerDied","Data":"952233de002cfde8cca90b1c48abf3b74e15246e36443b57446452962fd5e06d"} Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.371532 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="952233de002cfde8cca90b1c48abf3b74e15246e36443b57446452962fd5e06d" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.747691 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9b42d" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.784161 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-144d-account-create-7wszp" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.799644 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-24bf-account-create-h2qc7" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.822524 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgbjg\" (UniqueName: \"kubernetes.io/projected/a426f26e-d114-4570-81da-3be0c4aca095-kube-api-access-cgbjg\") pod \"a426f26e-d114-4570-81da-3be0c4aca095\" (UID: \"a426f26e-d114-4570-81da-3be0c4aca095\") " Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.822591 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a426f26e-d114-4570-81da-3be0c4aca095-operator-scripts\") pod \"a426f26e-d114-4570-81da-3be0c4aca095\" (UID: \"a426f26e-d114-4570-81da-3be0c4aca095\") " Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.823399 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a426f26e-d114-4570-81da-3be0c4aca095-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a426f26e-d114-4570-81da-3be0c4aca095" (UID: "a426f26e-d114-4570-81da-3be0c4aca095"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.827449 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a426f26e-d114-4570-81da-3be0c4aca095-kube-api-access-cgbjg" (OuterVolumeSpecName: "kube-api-access-cgbjg") pod "a426f26e-d114-4570-81da-3be0c4aca095" (UID: "a426f26e-d114-4570-81da-3be0c4aca095"). InnerVolumeSpecName "kube-api-access-cgbjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.924508 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmvpz\" (UniqueName: \"kubernetes.io/projected/bc8693ac-cbb7-4468-8600-32ef277c0db1-kube-api-access-hmvpz\") pod \"bc8693ac-cbb7-4468-8600-32ef277c0db1\" (UID: \"bc8693ac-cbb7-4468-8600-32ef277c0db1\") " Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.924591 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k95l\" (UniqueName: \"kubernetes.io/projected/f4073833-fe54-4322-9dd2-63d98b5f9788-kube-api-access-9k95l\") pod \"f4073833-fe54-4322-9dd2-63d98b5f9788\" (UID: \"f4073833-fe54-4322-9dd2-63d98b5f9788\") " Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.924641 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4073833-fe54-4322-9dd2-63d98b5f9788-operator-scripts\") pod \"f4073833-fe54-4322-9dd2-63d98b5f9788\" (UID: \"f4073833-fe54-4322-9dd2-63d98b5f9788\") " Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.924738 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc8693ac-cbb7-4468-8600-32ef277c0db1-operator-scripts\") pod \"bc8693ac-cbb7-4468-8600-32ef277c0db1\" (UID: \"bc8693ac-cbb7-4468-8600-32ef277c0db1\") " Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.925155 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4073833-fe54-4322-9dd2-63d98b5f9788-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f4073833-fe54-4322-9dd2-63d98b5f9788" (UID: "f4073833-fe54-4322-9dd2-63d98b5f9788"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.925322 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4073833-fe54-4322-9dd2-63d98b5f9788-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.925342 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgbjg\" (UniqueName: \"kubernetes.io/projected/a426f26e-d114-4570-81da-3be0c4aca095-kube-api-access-cgbjg\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.925354 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a426f26e-d114-4570-81da-3be0c4aca095-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.925575 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc8693ac-cbb7-4468-8600-32ef277c0db1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc8693ac-cbb7-4468-8600-32ef277c0db1" (UID: "bc8693ac-cbb7-4468-8600-32ef277c0db1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.927365 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc8693ac-cbb7-4468-8600-32ef277c0db1-kube-api-access-hmvpz" (OuterVolumeSpecName: "kube-api-access-hmvpz") pod "bc8693ac-cbb7-4468-8600-32ef277c0db1" (UID: "bc8693ac-cbb7-4468-8600-32ef277c0db1"). InnerVolumeSpecName "kube-api-access-hmvpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.927579 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4073833-fe54-4322-9dd2-63d98b5f9788-kube-api-access-9k95l" (OuterVolumeSpecName: "kube-api-access-9k95l") pod "f4073833-fe54-4322-9dd2-63d98b5f9788" (UID: "f4073833-fe54-4322-9dd2-63d98b5f9788"). InnerVolumeSpecName "kube-api-access-9k95l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:01:03 crc kubenswrapper[4718]: I1123 15:01:03.946024 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dc8hm-config-dq2l7"] Nov 23 15:01:03 crc kubenswrapper[4718]: W1123 15:01:03.949678 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14fd5dfa_3946_4d41_afc9_e2c8fbb44570.slice/crio-c6bcb13e266bb6a0eaabeb7890676cfdb76f95b3453007237c72d00713280a96 WatchSource:0}: Error finding container c6bcb13e266bb6a0eaabeb7890676cfdb76f95b3453007237c72d00713280a96: Status 404 returned error can't find the container with id c6bcb13e266bb6a0eaabeb7890676cfdb76f95b3453007237c72d00713280a96 Nov 23 15:01:04 crc kubenswrapper[4718]: I1123 15:01:04.027130 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmvpz\" (UniqueName: \"kubernetes.io/projected/bc8693ac-cbb7-4468-8600-32ef277c0db1-kube-api-access-hmvpz\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:04 crc kubenswrapper[4718]: I1123 15:01:04.027166 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k95l\" (UniqueName: \"kubernetes.io/projected/f4073833-fe54-4322-9dd2-63d98b5f9788-kube-api-access-9k95l\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:04 crc kubenswrapper[4718]: I1123 15:01:04.027175 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc8693ac-cbb7-4468-8600-32ef277c0db1-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:04 crc kubenswrapper[4718]: I1123 15:01:04.377717 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-24bf-account-create-h2qc7" event={"ID":"bc8693ac-cbb7-4468-8600-32ef277c0db1","Type":"ContainerDied","Data":"c6883a5785899acdc8396b524817032e4805cc382daeabcc6686b2e165c1ac05"} Nov 23 15:01:04 crc kubenswrapper[4718]: I1123 15:01:04.377752 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-24bf-account-create-h2qc7" Nov 23 15:01:04 crc kubenswrapper[4718]: I1123 15:01:04.377764 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6883a5785899acdc8396b524817032e4805cc382daeabcc6686b2e165c1ac05" Nov 23 15:01:04 crc kubenswrapper[4718]: I1123 15:01:04.379356 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-144d-account-create-7wszp" Nov 23 15:01:04 crc kubenswrapper[4718]: I1123 15:01:04.379423 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-144d-account-create-7wszp" event={"ID":"f4073833-fe54-4322-9dd2-63d98b5f9788","Type":"ContainerDied","Data":"4329b35d02f6510510343e7325c0caa3635d65c5deeb5b74469bed819c86f3f7"} Nov 23 15:01:04 crc kubenswrapper[4718]: I1123 15:01:04.379524 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4329b35d02f6510510343e7325c0caa3635d65c5deeb5b74469bed819c86f3f7" Nov 23 15:01:04 crc kubenswrapper[4718]: I1123 15:01:04.381758 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9b42d" event={"ID":"a426f26e-d114-4570-81da-3be0c4aca095","Type":"ContainerDied","Data":"5369fa7ac22d8e4fdea27945b9f148276609dca28fac2396e46220ec801c930c"} Nov 23 15:01:04 crc kubenswrapper[4718]: I1123 15:01:04.381797 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5369fa7ac22d8e4fdea27945b9f148276609dca28fac2396e46220ec801c930c" Nov 23 15:01:04 crc kubenswrapper[4718]: I1123 15:01:04.381818 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9b42d" Nov 23 15:01:04 crc kubenswrapper[4718]: I1123 15:01:04.383695 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dc8hm-config-dq2l7" event={"ID":"14fd5dfa-3946-4d41-afc9-e2c8fbb44570","Type":"ContainerStarted","Data":"613c99dd8879460c8944a9074c031bda1a4a28442f8d718fc1b0f8964c7eaef7"} Nov 23 15:01:04 crc kubenswrapper[4718]: I1123 15:01:04.383739 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dc8hm-config-dq2l7" event={"ID":"14fd5dfa-3946-4d41-afc9-e2c8fbb44570","Type":"ContainerStarted","Data":"c6bcb13e266bb6a0eaabeb7890676cfdb76f95b3453007237c72d00713280a96"} Nov 23 15:01:04 crc kubenswrapper[4718]: I1123 15:01:04.449839 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-dc8hm-config-dq2l7" podStartSLOduration=2.449821609 podStartE2EDuration="2.449821609s" podCreationTimestamp="2025-11-23 15:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:01:04.447393824 +0000 UTC m=+915.687013668" watchObservedRunningTime="2025-11-23 15:01:04.449821609 +0000 UTC m=+915.689441453" Nov 23 15:01:05 crc kubenswrapper[4718]: I1123 15:01:05.394770 4718 generic.go:334] "Generic (PLEG): container finished" podID="14fd5dfa-3946-4d41-afc9-e2c8fbb44570" containerID="613c99dd8879460c8944a9074c031bda1a4a28442f8d718fc1b0f8964c7eaef7" exitCode=0 Nov 23 15:01:05 crc kubenswrapper[4718]: I1123 15:01:05.394868 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dc8hm-config-dq2l7" event={"ID":"14fd5dfa-3946-4d41-afc9-e2c8fbb44570","Type":"ContainerDied","Data":"613c99dd8879460c8944a9074c031bda1a4a28442f8d718fc1b0f8964c7eaef7"} Nov 23 15:01:05 crc kubenswrapper[4718]: I1123 15:01:05.397413 4718 generic.go:334] "Generic (PLEG): container finished" podID="53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed" containerID="d6478d663a65987d90b0eec3e5169d51a136fda6e4d7dbeef356884ff9f91c2c" exitCode=0 Nov 23 15:01:05 crc kubenswrapper[4718]: I1123 15:01:05.397471 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed","Type":"ContainerDied","Data":"d6478d663a65987d90b0eec3e5169d51a136fda6e4d7dbeef356884ff9f91c2c"} Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.126680 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-rbg79"] Nov 23 15:01:06 crc kubenswrapper[4718]: E1123 15:01:06.127194 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4073833-fe54-4322-9dd2-63d98b5f9788" containerName="mariadb-account-create" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.127210 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4073833-fe54-4322-9dd2-63d98b5f9788" containerName="mariadb-account-create" Nov 23 15:01:06 crc kubenswrapper[4718]: E1123 15:01:06.127219 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a426f26e-d114-4570-81da-3be0c4aca095" containerName="mariadb-database-create" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.127224 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a426f26e-d114-4570-81da-3be0c4aca095" containerName="mariadb-database-create" Nov 23 15:01:06 crc kubenswrapper[4718]: E1123 15:01:06.127250 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc8693ac-cbb7-4468-8600-32ef277c0db1" containerName="mariadb-account-create" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.127256 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc8693ac-cbb7-4468-8600-32ef277c0db1" containerName="mariadb-account-create" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.127393 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4073833-fe54-4322-9dd2-63d98b5f9788" containerName="mariadb-account-create" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.127404 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc8693ac-cbb7-4468-8600-32ef277c0db1" containerName="mariadb-account-create" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.127418 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a426f26e-d114-4570-81da-3be0c4aca095" containerName="mariadb-database-create" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.128477 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rbg79" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.180819 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rbg79"] Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.253970 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7fcb-account-create-w9xds"] Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.255264 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7fcb-account-create-w9xds" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.258650 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.264316 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7fcb-account-create-w9xds"] Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.267073 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fx5v\" (UniqueName: \"kubernetes.io/projected/5b125dda-4aa9-4a44-a714-f553fa853648-kube-api-access-8fx5v\") pod \"glance-db-create-rbg79\" (UID: \"5b125dda-4aa9-4a44-a714-f553fa853648\") " pod="openstack/glance-db-create-rbg79" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.267251 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b125dda-4aa9-4a44-a714-f553fa853648-operator-scripts\") pod \"glance-db-create-rbg79\" (UID: \"5b125dda-4aa9-4a44-a714-f553fa853648\") " pod="openstack/glance-db-create-rbg79" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.369032 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7zmr\" (UniqueName: \"kubernetes.io/projected/9bfa5922-5bb7-4b2c-a5ca-7349e85289a9-kube-api-access-q7zmr\") pod \"glance-7fcb-account-create-w9xds\" (UID: \"9bfa5922-5bb7-4b2c-a5ca-7349e85289a9\") " pod="openstack/glance-7fcb-account-create-w9xds" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.369080 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b125dda-4aa9-4a44-a714-f553fa853648-operator-scripts\") pod \"glance-db-create-rbg79\" (UID: \"5b125dda-4aa9-4a44-a714-f553fa853648\") " pod="openstack/glance-db-create-rbg79" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.369179 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bfa5922-5bb7-4b2c-a5ca-7349e85289a9-operator-scripts\") pod \"glance-7fcb-account-create-w9xds\" (UID: \"9bfa5922-5bb7-4b2c-a5ca-7349e85289a9\") " pod="openstack/glance-7fcb-account-create-w9xds" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.369229 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fx5v\" (UniqueName: \"kubernetes.io/projected/5b125dda-4aa9-4a44-a714-f553fa853648-kube-api-access-8fx5v\") pod \"glance-db-create-rbg79\" (UID: \"5b125dda-4aa9-4a44-a714-f553fa853648\") " pod="openstack/glance-db-create-rbg79" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.369690 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b125dda-4aa9-4a44-a714-f553fa853648-operator-scripts\") pod \"glance-db-create-rbg79\" (UID: \"5b125dda-4aa9-4a44-a714-f553fa853648\") " pod="openstack/glance-db-create-rbg79" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.385145 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fx5v\" (UniqueName: \"kubernetes.io/projected/5b125dda-4aa9-4a44-a714-f553fa853648-kube-api-access-8fx5v\") pod \"glance-db-create-rbg79\" (UID: \"5b125dda-4aa9-4a44-a714-f553fa853648\") " 
pod="openstack/glance-db-create-rbg79" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.406980 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed","Type":"ContainerStarted","Data":"5d0058c9c9e262b1cc6d76f6c4ac30f1a078663941009c1faf2e4264331a1baa"} Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.407153 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.444045 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371956.410751 podStartE2EDuration="1m20.444024163s" podCreationTimestamp="2025-11-23 14:59:46 +0000 UTC" firstStartedPulling="2025-11-23 14:59:54.008312317 +0000 UTC m=+845.247932201" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:01:06.433209248 +0000 UTC m=+917.672829102" watchObservedRunningTime="2025-11-23 15:01:06.444024163 +0000 UTC m=+917.683644017" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.444991 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rbg79" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.470581 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bfa5922-5bb7-4b2c-a5ca-7349e85289a9-operator-scripts\") pod \"glance-7fcb-account-create-w9xds\" (UID: \"9bfa5922-5bb7-4b2c-a5ca-7349e85289a9\") " pod="openstack/glance-7fcb-account-create-w9xds" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.471155 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7zmr\" (UniqueName: \"kubernetes.io/projected/9bfa5922-5bb7-4b2c-a5ca-7349e85289a9-kube-api-access-q7zmr\") pod \"glance-7fcb-account-create-w9xds\" (UID: \"9bfa5922-5bb7-4b2c-a5ca-7349e85289a9\") " pod="openstack/glance-7fcb-account-create-w9xds" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.471469 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bfa5922-5bb7-4b2c-a5ca-7349e85289a9-operator-scripts\") pod \"glance-7fcb-account-create-w9xds\" (UID: \"9bfa5922-5bb7-4b2c-a5ca-7349e85289a9\") " pod="openstack/glance-7fcb-account-create-w9xds" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.504583 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7zmr\" (UniqueName: \"kubernetes.io/projected/9bfa5922-5bb7-4b2c-a5ca-7349e85289a9-kube-api-access-q7zmr\") pod \"glance-7fcb-account-create-w9xds\" (UID: \"9bfa5922-5bb7-4b2c-a5ca-7349e85289a9\") " pod="openstack/glance-7fcb-account-create-w9xds" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.575526 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7fcb-account-create-w9xds" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.778872 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dc8hm-config-dq2l7" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.880936 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-var-log-ovn\") pod \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\" (UID: \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\") " Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.881018 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "14fd5dfa-3946-4d41-afc9-e2c8fbb44570" (UID: "14fd5dfa-3946-4d41-afc9-e2c8fbb44570"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.881765 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8kb5\" (UniqueName: \"kubernetes.io/projected/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-kube-api-access-k8kb5\") pod \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\" (UID: \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\") " Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.881805 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-scripts\") pod \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\" (UID: \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\") " Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.881912 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-var-run\") pod \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\" (UID: \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\") " Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.881942 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-var-run-ovn\") pod \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\" (UID: \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\") " Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.881989 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-additional-scripts\") pod \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\" (UID: \"14fd5dfa-3946-4d41-afc9-e2c8fbb44570\") " Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.882238 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-var-run" (OuterVolumeSpecName: "var-run") pod "14fd5dfa-3946-4d41-afc9-e2c8fbb44570" (UID: "14fd5dfa-3946-4d41-afc9-e2c8fbb44570"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.882373 4718 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-var-run\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.882391 4718 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.882414 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "14fd5dfa-3946-4d41-afc9-e2c8fbb44570" (UID: "14fd5dfa-3946-4d41-afc9-e2c8fbb44570"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.882731 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "14fd5dfa-3946-4d41-afc9-e2c8fbb44570" (UID: "14fd5dfa-3946-4d41-afc9-e2c8fbb44570"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.882926 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-scripts" (OuterVolumeSpecName: "scripts") pod "14fd5dfa-3946-4d41-afc9-e2c8fbb44570" (UID: "14fd5dfa-3946-4d41-afc9-e2c8fbb44570"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.885654 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-kube-api-access-k8kb5" (OuterVolumeSpecName: "kube-api-access-k8kb5") pod "14fd5dfa-3946-4d41-afc9-e2c8fbb44570" (UID: "14fd5dfa-3946-4d41-afc9-e2c8fbb44570"). InnerVolumeSpecName "kube-api-access-k8kb5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.979402 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rbg79"] Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.983349 4718 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.983369 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8kb5\" (UniqueName: \"kubernetes.io/projected/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-kube-api-access-k8kb5\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.983381 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:06 crc kubenswrapper[4718]: I1123 15:01:06.983389 4718 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/14fd5dfa-3946-4d41-afc9-e2c8fbb44570-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:07 crc kubenswrapper[4718]: I1123 15:01:07.120018 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7fcb-account-create-w9xds"] Nov 23 15:01:07 crc kubenswrapper[4718]: W1123 15:01:07.133276 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bfa5922_5bb7_4b2c_a5ca_7349e85289a9.slice/crio-fda7be15d00a4bdc2e7617e716dc8516aabce5335273b1ea3b1261518745783f WatchSource:0}: Error finding container fda7be15d00a4bdc2e7617e716dc8516aabce5335273b1ea3b1261518745783f: Status 404 returned error can't find the container with id fda7be15d00a4bdc2e7617e716dc8516aabce5335273b1ea3b1261518745783f Nov 23 15:01:07 crc kubenswrapper[4718]: I1123 15:01:07.419237 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dc8hm-config-dq2l7" Nov 23 15:01:07 crc kubenswrapper[4718]: I1123 15:01:07.419292 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dc8hm-config-dq2l7" event={"ID":"14fd5dfa-3946-4d41-afc9-e2c8fbb44570","Type":"ContainerDied","Data":"c6bcb13e266bb6a0eaabeb7890676cfdb76f95b3453007237c72d00713280a96"} Nov 23 15:01:07 crc kubenswrapper[4718]: I1123 15:01:07.420301 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6bcb13e266bb6a0eaabeb7890676cfdb76f95b3453007237c72d00713280a96" Nov 23 15:01:07 crc kubenswrapper[4718]: I1123 15:01:07.421996 4718 generic.go:334] "Generic (PLEG): container finished" podID="5b125dda-4aa9-4a44-a714-f553fa853648" containerID="ba52de14a3c4fc9fae39b20b264b56c24f4a231ef014b4ac53d915c9ee32f870" exitCode=0 Nov 23 15:01:07 crc kubenswrapper[4718]: I1123 15:01:07.422090 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rbg79" event={"ID":"5b125dda-4aa9-4a44-a714-f553fa853648","Type":"ContainerDied","Data":"ba52de14a3c4fc9fae39b20b264b56c24f4a231ef014b4ac53d915c9ee32f870"} Nov 23 15:01:07 crc kubenswrapper[4718]: I1123 15:01:07.422125 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rbg79" event={"ID":"5b125dda-4aa9-4a44-a714-f553fa853648","Type":"ContainerStarted","Data":"a027960ee869c249e3a71949e1a65b413d97b0ab49a8b1271cc4229810165ecf"} Nov 23 15:01:07 crc kubenswrapper[4718]: I1123 15:01:07.426315 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7fcb-account-create-w9xds" event={"ID":"9bfa5922-5bb7-4b2c-a5ca-7349e85289a9","Type":"ContainerStarted","Data":"72a6c3e7829fea07808719d8fc1f99e25ddad095b707bcb17dbcea988a53d3a6"} Nov 23 15:01:07 crc kubenswrapper[4718]: I1123 15:01:07.426372 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7fcb-account-create-w9xds" event={"ID":"9bfa5922-5bb7-4b2c-a5ca-7349e85289a9","Type":"ContainerStarted","Data":"fda7be15d00a4bdc2e7617e716dc8516aabce5335273b1ea3b1261518745783f"} Nov 23 15:01:07 crc kubenswrapper[4718]: I1123 15:01:07.456787 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-7fcb-account-create-w9xds" podStartSLOduration=1.456755549 podStartE2EDuration="1.456755549s" podCreationTimestamp="2025-11-23 15:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:01:07.45237626 +0000 UTC m=+918.691996114" watchObservedRunningTime="2025-11-23 15:01:07.456755549 +0000 UTC m=+918.696375393" Nov 23 15:01:07 crc kubenswrapper[4718]: I1123 15:01:07.598104 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-dc8hm" Nov 23 15:01:07 crc kubenswrapper[4718]: I1123 15:01:07.882428 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-dc8hm-config-dq2l7"] Nov 23 15:01:07 crc kubenswrapper[4718]: I1123 15:01:07.887802 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-dc8hm-config-dq2l7"] Nov 23 15:01:08 crc kubenswrapper[4718]: I1123 15:01:08.440075 4718 generic.go:334] "Generic (PLEG): container finished" podID="9bfa5922-5bb7-4b2c-a5ca-7349e85289a9" containerID="72a6c3e7829fea07808719d8fc1f99e25ddad095b707bcb17dbcea988a53d3a6" exitCode=0 Nov 23 15:01:08 crc kubenswrapper[4718]: I1123 15:01:08.452987 4718 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="14fd5dfa-3946-4d41-afc9-e2c8fbb44570" path="/var/lib/kubelet/pods/14fd5dfa-3946-4d41-afc9-e2c8fbb44570/volumes" Nov 23 15:01:08 crc kubenswrapper[4718]: I1123 15:01:08.453861 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7fcb-account-create-w9xds" event={"ID":"9bfa5922-5bb7-4b2c-a5ca-7349e85289a9","Type":"ContainerDied","Data":"72a6c3e7829fea07808719d8fc1f99e25ddad095b707bcb17dbcea988a53d3a6"} Nov 23 15:01:08 crc kubenswrapper[4718]: I1123 15:01:08.769897 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rbg79" Nov 23 15:01:08 crc kubenswrapper[4718]: I1123 15:01:08.813378 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b125dda-4aa9-4a44-a714-f553fa853648-operator-scripts\") pod \"5b125dda-4aa9-4a44-a714-f553fa853648\" (UID: \"5b125dda-4aa9-4a44-a714-f553fa853648\") " Nov 23 15:01:08 crc kubenswrapper[4718]: I1123 15:01:08.813500 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fx5v\" (UniqueName: \"kubernetes.io/projected/5b125dda-4aa9-4a44-a714-f553fa853648-kube-api-access-8fx5v\") pod \"5b125dda-4aa9-4a44-a714-f553fa853648\" (UID: \"5b125dda-4aa9-4a44-a714-f553fa853648\") " Nov 23 15:01:08 crc kubenswrapper[4718]: I1123 15:01:08.814025 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b125dda-4aa9-4a44-a714-f553fa853648-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5b125dda-4aa9-4a44-a714-f553fa853648" (UID: "5b125dda-4aa9-4a44-a714-f553fa853648"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:08 crc kubenswrapper[4718]: I1123 15:01:08.838940 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b125dda-4aa9-4a44-a714-f553fa853648-kube-api-access-8fx5v" (OuterVolumeSpecName: "kube-api-access-8fx5v") pod "5b125dda-4aa9-4a44-a714-f553fa853648" (UID: "5b125dda-4aa9-4a44-a714-f553fa853648"). InnerVolumeSpecName "kube-api-access-8fx5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:01:08 crc kubenswrapper[4718]: I1123 15:01:08.915201 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b125dda-4aa9-4a44-a714-f553fa853648-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:08 crc kubenswrapper[4718]: I1123 15:01:08.915239 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fx5v\" (UniqueName: \"kubernetes.io/projected/5b125dda-4aa9-4a44-a714-f553fa853648-kube-api-access-8fx5v\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:09 crc kubenswrapper[4718]: I1123 15:01:09.450546 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-rbg79" Nov 23 15:01:09 crc kubenswrapper[4718]: I1123 15:01:09.450545 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rbg79" event={"ID":"5b125dda-4aa9-4a44-a714-f553fa853648","Type":"ContainerDied","Data":"a027960ee869c249e3a71949e1a65b413d97b0ab49a8b1271cc4229810165ecf"} Nov 23 15:01:09 crc kubenswrapper[4718]: I1123 15:01:09.450954 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a027960ee869c249e3a71949e1a65b413d97b0ab49a8b1271cc4229810165ecf" Nov 23 15:01:09 crc kubenswrapper[4718]: I1123 15:01:09.699001 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 23 15:01:09 crc kubenswrapper[4718]: I1123 15:01:09.770645 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7fcb-account-create-w9xds" Nov 23 15:01:09 crc kubenswrapper[4718]: I1123 15:01:09.827847 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7zmr\" (UniqueName: \"kubernetes.io/projected/9bfa5922-5bb7-4b2c-a5ca-7349e85289a9-kube-api-access-q7zmr\") pod \"9bfa5922-5bb7-4b2c-a5ca-7349e85289a9\" (UID: \"9bfa5922-5bb7-4b2c-a5ca-7349e85289a9\") " Nov 23 15:01:09 crc kubenswrapper[4718]: I1123 15:01:09.827902 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bfa5922-5bb7-4b2c-a5ca-7349e85289a9-operator-scripts\") pod \"9bfa5922-5bb7-4b2c-a5ca-7349e85289a9\" (UID: \"9bfa5922-5bb7-4b2c-a5ca-7349e85289a9\") " Nov 23 15:01:09 crc kubenswrapper[4718]: I1123 15:01:09.829773 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bfa5922-5bb7-4b2c-a5ca-7349e85289a9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9bfa5922-5bb7-4b2c-a5ca-7349e85289a9" (UID: "9bfa5922-5bb7-4b2c-a5ca-7349e85289a9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:09 crc kubenswrapper[4718]: I1123 15:01:09.833695 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bfa5922-5bb7-4b2c-a5ca-7349e85289a9-kube-api-access-q7zmr" (OuterVolumeSpecName: "kube-api-access-q7zmr") pod "9bfa5922-5bb7-4b2c-a5ca-7349e85289a9" (UID: "9bfa5922-5bb7-4b2c-a5ca-7349e85289a9"). InnerVolumeSpecName "kube-api-access-q7zmr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:01:09 crc kubenswrapper[4718]: I1123 15:01:09.930258 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7zmr\" (UniqueName: \"kubernetes.io/projected/9bfa5922-5bb7-4b2c-a5ca-7349e85289a9-kube-api-access-q7zmr\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:09 crc kubenswrapper[4718]: I1123 15:01:09.930299 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bfa5922-5bb7-4b2c-a5ca-7349e85289a9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:10 crc kubenswrapper[4718]: I1123 15:01:10.458211 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7fcb-account-create-w9xds" event={"ID":"9bfa5922-5bb7-4b2c-a5ca-7349e85289a9","Type":"ContainerDied","Data":"fda7be15d00a4bdc2e7617e716dc8516aabce5335273b1ea3b1261518745783f"} Nov 23 15:01:10 crc kubenswrapper[4718]: I1123 15:01:10.458470 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fda7be15d00a4bdc2e7617e716dc8516aabce5335273b1ea3b1261518745783f" Nov 23 15:01:10 crc kubenswrapper[4718]: I1123 15:01:10.458300 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7fcb-account-create-w9xds" Nov 23 15:01:10 crc kubenswrapper[4718]: I1123 15:01:10.660660 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-8qshv"] Nov 23 15:01:10 crc kubenswrapper[4718]: E1123 15:01:10.661054 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bfa5922-5bb7-4b2c-a5ca-7349e85289a9" containerName="mariadb-account-create" Nov 23 15:01:10 crc kubenswrapper[4718]: I1123 15:01:10.661076 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bfa5922-5bb7-4b2c-a5ca-7349e85289a9" containerName="mariadb-account-create" Nov 23 15:01:10 crc kubenswrapper[4718]: E1123 15:01:10.661103 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14fd5dfa-3946-4d41-afc9-e2c8fbb44570" containerName="ovn-config" Nov 23 15:01:10 crc kubenswrapper[4718]: I1123 15:01:10.661127 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="14fd5dfa-3946-4d41-afc9-e2c8fbb44570" containerName="ovn-config" Nov 23 15:01:10 crc kubenswrapper[4718]: E1123 15:01:10.661151 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b125dda-4aa9-4a44-a714-f553fa853648" containerName="mariadb-database-create" Nov 23 15:01:10 crc kubenswrapper[4718]: I1123 15:01:10.661160 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b125dda-4aa9-4a44-a714-f553fa853648" containerName="mariadb-database-create" Nov 23 15:01:10 crc kubenswrapper[4718]: I1123 15:01:10.661519 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b125dda-4aa9-4a44-a714-f553fa853648" containerName="mariadb-database-create" Nov 23 15:01:10 crc kubenswrapper[4718]: I1123 15:01:10.661543 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="14fd5dfa-3946-4d41-afc9-e2c8fbb44570" containerName="ovn-config" Nov 23 15:01:10 crc kubenswrapper[4718]: I1123 15:01:10.661557 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bfa5922-5bb7-4b2c-a5ca-7349e85289a9" containerName="mariadb-account-create" Nov 23 15:01:10 crc kubenswrapper[4718]: I1123 15:01:10.662155 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-8qshv" Nov 23 15:01:10 crc kubenswrapper[4718]: I1123 15:01:10.703743 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8qshv"] Nov 23 15:01:10 crc kubenswrapper[4718]: I1123 15:01:10.744888 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54124fda-4239-42e9-b86f-2cfa15af47f0-operator-scripts\") pod \"keystone-db-create-8qshv\" (UID: \"54124fda-4239-42e9-b86f-2cfa15af47f0\") " pod="openstack/keystone-db-create-8qshv" Nov 23 15:01:10 crc kubenswrapper[4718]: I1123 15:01:10.745250 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-849q8\" (UniqueName: \"kubernetes.io/projected/54124fda-4239-42e9-b86f-2cfa15af47f0-kube-api-access-849q8\") pod \"keystone-db-create-8qshv\" (UID: \"54124fda-4239-42e9-b86f-2cfa15af47f0\") " pod="openstack/keystone-db-create-8qshv" Nov 23 15:01:10 crc kubenswrapper[4718]: I1123 15:01:10.847203 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54124fda-4239-42e9-b86f-2cfa15af47f0-operator-scripts\") pod \"keystone-db-create-8qshv\" (UID: \"54124fda-4239-42e9-b86f-2cfa15af47f0\") " pod="openstack/keystone-db-create-8qshv" Nov 23 15:01:10 crc kubenswrapper[4718]: I1123 15:01:10.847304 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-849q8\" (UniqueName: \"kubernetes.io/projected/54124fda-4239-42e9-b86f-2cfa15af47f0-kube-api-access-849q8\") pod \"keystone-db-create-8qshv\" (UID: \"54124fda-4239-42e9-b86f-2cfa15af47f0\") " pod="openstack/keystone-db-create-8qshv" Nov 23 15:01:10 crc kubenswrapper[4718]: I1123 15:01:10.848269 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54124fda-4239-42e9-b86f-2cfa15af47f0-operator-scripts\") pod \"keystone-db-create-8qshv\" (UID: \"54124fda-4239-42e9-b86f-2cfa15af47f0\") " pod="openstack/keystone-db-create-8qshv" Nov 23 15:01:10 crc kubenswrapper[4718]: I1123 15:01:10.865734 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-849q8\" (UniqueName: \"kubernetes.io/projected/54124fda-4239-42e9-b86f-2cfa15af47f0-kube-api-access-849q8\") pod \"keystone-db-create-8qshv\" (UID: \"54124fda-4239-42e9-b86f-2cfa15af47f0\") " pod="openstack/keystone-db-create-8qshv" Nov 23 15:01:10 crc kubenswrapper[4718]: I1123 15:01:10.987088 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8qshv" Nov 23 15:01:11 crc kubenswrapper[4718]: I1123 15:01:11.387100 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-sz47m"] Nov 23 15:01:11 crc kubenswrapper[4718]: I1123 15:01:11.388822 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-sz47m" Nov 23 15:01:11 crc kubenswrapper[4718]: I1123 15:01:11.396919 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 23 15:01:11 crc kubenswrapper[4718]: I1123 15:01:11.400160 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-nr9gc" Nov 23 15:01:11 crc kubenswrapper[4718]: I1123 15:01:11.403213 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-sz47m"] Nov 23 15:01:11 crc kubenswrapper[4718]: I1123 15:01:11.456332 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1935d5b8-9171-4708-9b50-4dbef79d106b-combined-ca-bundle\") pod \"glance-db-sync-sz47m\" (UID: \"1935d5b8-9171-4708-9b50-4dbef79d106b\") " pod="openstack/glance-db-sync-sz47m" Nov 23 15:01:11 crc kubenswrapper[4718]: I1123 15:01:11.456399 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdh8k\" (UniqueName: \"kubernetes.io/projected/1935d5b8-9171-4708-9b50-4dbef79d106b-kube-api-access-pdh8k\") pod \"glance-db-sync-sz47m\" (UID: \"1935d5b8-9171-4708-9b50-4dbef79d106b\") " pod="openstack/glance-db-sync-sz47m" Nov 23 15:01:11 crc kubenswrapper[4718]: I1123 15:01:11.456623 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1935d5b8-9171-4708-9b50-4dbef79d106b-config-data\") pod \"glance-db-sync-sz47m\" (UID: \"1935d5b8-9171-4708-9b50-4dbef79d106b\") " pod="openstack/glance-db-sync-sz47m" Nov 23 15:01:11 crc kubenswrapper[4718]: I1123 15:01:11.456726 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1935d5b8-9171-4708-9b50-4dbef79d106b-db-sync-config-data\") pod \"glance-db-sync-sz47m\" (UID: \"1935d5b8-9171-4708-9b50-4dbef79d106b\") " pod="openstack/glance-db-sync-sz47m" Nov 23 15:01:11 crc kubenswrapper[4718]: I1123 15:01:11.482125 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8qshv"] Nov 23 15:01:11 crc kubenswrapper[4718]: I1123 15:01:11.558030 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1935d5b8-9171-4708-9b50-4dbef79d106b-combined-ca-bundle\") pod \"glance-db-sync-sz47m\" (UID: \"1935d5b8-9171-4708-9b50-4dbef79d106b\") " pod="openstack/glance-db-sync-sz47m" Nov 23 15:01:11 crc kubenswrapper[4718]: I1123 15:01:11.558070 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdh8k\" (UniqueName: \"kubernetes.io/projected/1935d5b8-9171-4708-9b50-4dbef79d106b-kube-api-access-pdh8k\") pod \"glance-db-sync-sz47m\" (UID: \"1935d5b8-9171-4708-9b50-4dbef79d106b\") " pod="openstack/glance-db-sync-sz47m" Nov 23 15:01:11 crc kubenswrapper[4718]: I1123 15:01:11.558107 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1935d5b8-9171-4708-9b50-4dbef79d106b-config-data\") pod \"glance-db-sync-sz47m\" (UID: \"1935d5b8-9171-4708-9b50-4dbef79d106b\") " pod="openstack/glance-db-sync-sz47m" Nov 23 15:01:11 crc kubenswrapper[4718]: I1123 15:01:11.558129 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1935d5b8-9171-4708-9b50-4dbef79d106b-db-sync-config-data\") pod \"glance-db-sync-sz47m\" (UID: \"1935d5b8-9171-4708-9b50-4dbef79d106b\") " pod="openstack/glance-db-sync-sz47m" Nov 23 15:01:11 crc kubenswrapper[4718]: I1123 15:01:11.563826 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1935d5b8-9171-4708-9b50-4dbef79d106b-db-sync-config-data\") pod \"glance-db-sync-sz47m\" (UID: \"1935d5b8-9171-4708-9b50-4dbef79d106b\") " pod="openstack/glance-db-sync-sz47m" Nov 23 15:01:11 crc kubenswrapper[4718]: I1123 15:01:11.564084 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1935d5b8-9171-4708-9b50-4dbef79d106b-combined-ca-bundle\") pod \"glance-db-sync-sz47m\" (UID: \"1935d5b8-9171-4708-9b50-4dbef79d106b\") " pod="openstack/glance-db-sync-sz47m" Nov 23 15:01:11 crc kubenswrapper[4718]: I1123 15:01:11.564109 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1935d5b8-9171-4708-9b50-4dbef79d106b-config-data\") pod \"glance-db-sync-sz47m\" (UID: \"1935d5b8-9171-4708-9b50-4dbef79d106b\") " pod="openstack/glance-db-sync-sz47m" Nov 23 15:01:11 crc kubenswrapper[4718]: I1123 15:01:11.573521 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdh8k\" (UniqueName: \"kubernetes.io/projected/1935d5b8-9171-4708-9b50-4dbef79d106b-kube-api-access-pdh8k\") pod \"glance-db-sync-sz47m\" (UID: \"1935d5b8-9171-4708-9b50-4dbef79d106b\") " pod="openstack/glance-db-sync-sz47m" Nov 23 15:01:11 crc kubenswrapper[4718]: I1123 15:01:11.709771 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-sz47m" Nov 23 15:01:12 crc kubenswrapper[4718]: I1123 15:01:12.254213 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-sz47m"] Nov 23 15:01:12 crc kubenswrapper[4718]: I1123 15:01:12.473751 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-sz47m" event={"ID":"1935d5b8-9171-4708-9b50-4dbef79d106b","Type":"ContainerStarted","Data":"ed616a9101b439d37e35ef56ed8036c3d33eb713e787e8eb1e8b7b86e836cd48"} Nov 23 15:01:12 crc kubenswrapper[4718]: I1123 15:01:12.475663 4718 generic.go:334] "Generic (PLEG): container finished" podID="54124fda-4239-42e9-b86f-2cfa15af47f0" containerID="6ac5ec00a43f3ddbbe1b3ddb068af81fe9dd29ee5b4516aadb9b269da681d291" exitCode=0 Nov 23 15:01:12 crc kubenswrapper[4718]: I1123 15:01:12.475723 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8qshv" event={"ID":"54124fda-4239-42e9-b86f-2cfa15af47f0","Type":"ContainerDied","Data":"6ac5ec00a43f3ddbbe1b3ddb068af81fe9dd29ee5b4516aadb9b269da681d291"} Nov 23 15:01:12 crc kubenswrapper[4718]: I1123 15:01:12.475776 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8qshv" event={"ID":"54124fda-4239-42e9-b86f-2cfa15af47f0","Type":"ContainerStarted","Data":"8b10e56d7e0d936e45caed93786befb3a195b6c7179ce06110540c903aa1795c"} Nov 23 15:01:13 crc kubenswrapper[4718]: I1123 15:01:13.818828 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-8qshv" Nov 23 15:01:13 crc kubenswrapper[4718]: I1123 15:01:13.900534 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-849q8\" (UniqueName: \"kubernetes.io/projected/54124fda-4239-42e9-b86f-2cfa15af47f0-kube-api-access-849q8\") pod \"54124fda-4239-42e9-b86f-2cfa15af47f0\" (UID: \"54124fda-4239-42e9-b86f-2cfa15af47f0\") " Nov 23 15:01:13 crc kubenswrapper[4718]: I1123 15:01:13.900612 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54124fda-4239-42e9-b86f-2cfa15af47f0-operator-scripts\") pod \"54124fda-4239-42e9-b86f-2cfa15af47f0\" (UID: \"54124fda-4239-42e9-b86f-2cfa15af47f0\") " Nov 23 15:01:13 crc kubenswrapper[4718]: I1123 15:01:13.901366 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54124fda-4239-42e9-b86f-2cfa15af47f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "54124fda-4239-42e9-b86f-2cfa15af47f0" (UID: "54124fda-4239-42e9-b86f-2cfa15af47f0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:13 crc kubenswrapper[4718]: I1123 15:01:13.907783 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54124fda-4239-42e9-b86f-2cfa15af47f0-kube-api-access-849q8" (OuterVolumeSpecName: "kube-api-access-849q8") pod "54124fda-4239-42e9-b86f-2cfa15af47f0" (UID: "54124fda-4239-42e9-b86f-2cfa15af47f0"). InnerVolumeSpecName "kube-api-access-849q8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:01:14 crc kubenswrapper[4718]: I1123 15:01:14.003014 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-849q8\" (UniqueName: \"kubernetes.io/projected/54124fda-4239-42e9-b86f-2cfa15af47f0-kube-api-access-849q8\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:14 crc kubenswrapper[4718]: I1123 15:01:14.003094 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54124fda-4239-42e9-b86f-2cfa15af47f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:14 crc kubenswrapper[4718]: I1123 15:01:14.501496 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8qshv" event={"ID":"54124fda-4239-42e9-b86f-2cfa15af47f0","Type":"ContainerDied","Data":"8b10e56d7e0d936e45caed93786befb3a195b6c7179ce06110540c903aa1795c"} Nov 23 15:01:14 crc kubenswrapper[4718]: I1123 15:01:14.501556 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b10e56d7e0d936e45caed93786befb3a195b6c7179ce06110540c903aa1795c" Nov 23 15:01:14 crc kubenswrapper[4718]: I1123 15:01:14.501636 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-8qshv" Nov 23 15:01:16 crc kubenswrapper[4718]: I1123 15:01:16.355279 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ef94753b-867a-4e46-9ff8-66178f25efaa-etc-swift\") pod \"swift-storage-0\" (UID: \"ef94753b-867a-4e46-9ff8-66178f25efaa\") " pod="openstack/swift-storage-0" Nov 23 15:01:16 crc kubenswrapper[4718]: I1123 15:01:16.362547 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ef94753b-867a-4e46-9ff8-66178f25efaa-etc-swift\") pod \"swift-storage-0\" (UID: \"ef94753b-867a-4e46-9ff8-66178f25efaa\") " pod="openstack/swift-storage-0" Nov 23 15:01:16 crc kubenswrapper[4718]: I1123 15:01:16.619466 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 23 15:01:17 crc kubenswrapper[4718]: I1123 15:01:17.428717 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 23 15:01:17 crc kubenswrapper[4718]: I1123 15:01:17.674661 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:01:17 crc kubenswrapper[4718]: I1123 15:01:17.786413 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-zjsxx"] Nov 23 15:01:17 crc kubenswrapper[4718]: E1123 15:01:17.786786 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54124fda-4239-42e9-b86f-2cfa15af47f0" containerName="mariadb-database-create" Nov 23 15:01:17 crc kubenswrapper[4718]: I1123 15:01:17.786805 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="54124fda-4239-42e9-b86f-2cfa15af47f0" containerName="mariadb-database-create" Nov 23 15:01:17 crc kubenswrapper[4718]: I1123 15:01:17.786962 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="54124fda-4239-42e9-b86f-2cfa15af47f0" containerName="mariadb-database-create" Nov 23 15:01:17 crc kubenswrapper[4718]: I1123 15:01:17.787463 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zjsxx" Nov 23 15:01:17 crc kubenswrapper[4718]: I1123 15:01:17.812173 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zjsxx"] Nov 23 15:01:17 crc kubenswrapper[4718]: I1123 15:01:17.881616 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j466b\" (UniqueName: \"kubernetes.io/projected/cc37da05-93d6-403b-b320-c2445c7880cd-kube-api-access-j466b\") pod \"cinder-db-create-zjsxx\" (UID: \"cc37da05-93d6-403b-b320-c2445c7880cd\") " pod="openstack/cinder-db-create-zjsxx" Nov 23 15:01:17 crc kubenswrapper[4718]: I1123 15:01:17.881750 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc37da05-93d6-403b-b320-c2445c7880cd-operator-scripts\") pod \"cinder-db-create-zjsxx\" (UID: \"cc37da05-93d6-403b-b320-c2445c7880cd\") " pod="openstack/cinder-db-create-zjsxx" Nov 23 15:01:17 crc kubenswrapper[4718]: I1123 15:01:17.899843 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-656vg"] Nov 23 15:01:17 crc kubenswrapper[4718]: I1123 15:01:17.900923 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-656vg" Nov 23 15:01:17 crc kubenswrapper[4718]: I1123 15:01:17.932616 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c821-account-create-hkfkc"] Nov 23 15:01:17 crc kubenswrapper[4718]: I1123 15:01:17.934955 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c821-account-create-hkfkc" Nov 23 15:01:17 crc kubenswrapper[4718]: I1123 15:01:17.941772 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 23 15:01:17 crc kubenswrapper[4718]: I1123 15:01:17.963920 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-656vg"] Nov 23 15:01:17 crc kubenswrapper[4718]: I1123 15:01:17.987455 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc37da05-93d6-403b-b320-c2445c7880cd-operator-scripts\") pod \"cinder-db-create-zjsxx\" (UID: \"cc37da05-93d6-403b-b320-c2445c7880cd\") " pod="openstack/cinder-db-create-zjsxx" Nov 23 15:01:17 crc kubenswrapper[4718]: I1123 15:01:17.987717 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f64934b7-7c67-49de-9f77-ac5c6873a04b-operator-scripts\") pod \"cinder-c821-account-create-hkfkc\" (UID: \"f64934b7-7c67-49de-9f77-ac5c6873a04b\") " pod="openstack/cinder-c821-account-create-hkfkc" Nov 23 15:01:17 crc kubenswrapper[4718]: I1123 15:01:17.987859 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a582e7aa-1025-4a28-9dc8-18fbb5d1d857-operator-scripts\") pod \"barbican-db-create-656vg\" (UID: \"a582e7aa-1025-4a28-9dc8-18fbb5d1d857\") " pod="openstack/barbican-db-create-656vg" Nov 23 15:01:17 crc kubenswrapper[4718]: I1123 15:01:17.987939 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c46jx\" (UniqueName: \"kubernetes.io/projected/f64934b7-7c67-49de-9f77-ac5c6873a04b-kube-api-access-c46jx\") pod \"cinder-c821-account-create-hkfkc\" (UID: \"f64934b7-7c67-49de-9f77-ac5c6873a04b\") " pod="openstack/cinder-c821-account-create-hkfkc" Nov 23 15:01:17 crc kubenswrapper[4718]: I1123 15:01:17.987995 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j466b\" (UniqueName: \"kubernetes.io/projected/cc37da05-93d6-403b-b320-c2445c7880cd-kube-api-access-j466b\") pod \"cinder-db-create-zjsxx\" (UID: \"cc37da05-93d6-403b-b320-c2445c7880cd\") " pod="openstack/cinder-db-create-zjsxx" Nov 23 15:01:17 crc kubenswrapper[4718]: I1123 15:01:17.988026 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvpnj\" (UniqueName: \"kubernetes.io/projected/a582e7aa-1025-4a28-9dc8-18fbb5d1d857-kube-api-access-dvpnj\") pod \"barbican-db-create-656vg\" (UID: \"a582e7aa-1025-4a28-9dc8-18fbb5d1d857\") " pod="openstack/barbican-db-create-656vg" Nov 23 15:01:17 crc kubenswrapper[4718]: I1123 15:01:17.988728 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc37da05-93d6-403b-b320-c2445c7880cd-operator-scripts\") pod \"cinder-db-create-zjsxx\" (UID: \"cc37da05-93d6-403b-b320-c2445c7880cd\") " pod="openstack/cinder-db-create-zjsxx" 
Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.004688 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c821-account-create-hkfkc"] Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.008360 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j466b\" (UniqueName: \"kubernetes.io/projected/cc37da05-93d6-403b-b320-c2445c7880cd-kube-api-access-j466b\") pod \"cinder-db-create-zjsxx\" (UID: \"cc37da05-93d6-403b-b320-c2445c7880cd\") " pod="openstack/cinder-db-create-zjsxx" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.030713 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-e4cc-account-create-wbqq7"] Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.032059 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e4cc-account-create-wbqq7" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.034246 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.041712 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-e4cc-account-create-wbqq7"] Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.087219 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-9lfvb"] Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.088381 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-9lfvb"] Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.088567 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-9lfvb" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.090409 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a582e7aa-1025-4a28-9dc8-18fbb5d1d857-operator-scripts\") pod \"barbican-db-create-656vg\" (UID: \"a582e7aa-1025-4a28-9dc8-18fbb5d1d857\") " pod="openstack/barbican-db-create-656vg" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.090480 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c46jx\" (UniqueName: \"kubernetes.io/projected/f64934b7-7c67-49de-9f77-ac5c6873a04b-kube-api-access-c46jx\") pod \"cinder-c821-account-create-hkfkc\" (UID: \"f64934b7-7c67-49de-9f77-ac5c6873a04b\") " pod="openstack/cinder-c821-account-create-hkfkc" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.090523 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7718d50a-4a43-4c73-90ab-b7019cc29fb3-operator-scripts\") pod \"barbican-e4cc-account-create-wbqq7\" (UID: \"7718d50a-4a43-4c73-90ab-b7019cc29fb3\") " pod="openstack/barbican-e4cc-account-create-wbqq7" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.090539 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvpnj\" (UniqueName: \"kubernetes.io/projected/a582e7aa-1025-4a28-9dc8-18fbb5d1d857-kube-api-access-dvpnj\") pod \"barbican-db-create-656vg\" (UID: \"a582e7aa-1025-4a28-9dc8-18fbb5d1d857\") " pod="openstack/barbican-db-create-656vg" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.090589 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f64934b7-7c67-49de-9f77-ac5c6873a04b-operator-scripts\") pod \"cinder-c821-account-create-hkfkc\" (UID: \"f64934b7-7c67-49de-9f77-ac5c6873a04b\") " pod="openstack/cinder-c821-account-create-hkfkc" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.090624 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg69v\" (UniqueName: \"kubernetes.io/projected/7718d50a-4a43-4c73-90ab-b7019cc29fb3-kube-api-access-hg69v\") pod \"barbican-e4cc-account-create-wbqq7\" (UID: \"7718d50a-4a43-4c73-90ab-b7019cc29fb3\") " pod="openstack/barbican-e4cc-account-create-wbqq7" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.091239 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a582e7aa-1025-4a28-9dc8-18fbb5d1d857-operator-scripts\") pod \"barbican-db-create-656vg\" (UID: \"a582e7aa-1025-4a28-9dc8-18fbb5d1d857\") " pod="openstack/barbican-db-create-656vg" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.091977 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f64934b7-7c67-49de-9f77-ac5c6873a04b-operator-scripts\") pod \"cinder-c821-account-create-hkfkc\" (UID: \"f64934b7-7c67-49de-9f77-ac5c6873a04b\") " pod="openstack/cinder-c821-account-create-hkfkc" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.104902 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xknxt" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.105174 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.105111 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.105466 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.115814 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c46jx\" (UniqueName: \"kubernetes.io/projected/f64934b7-7c67-49de-9f77-ac5c6873a04b-kube-api-access-c46jx\") pod \"cinder-c821-account-create-hkfkc\" (UID: \"f64934b7-7c67-49de-9f77-ac5c6873a04b\") " pod="openstack/cinder-c821-account-create-hkfkc" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.125088 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-zjsxx" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.128041 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvpnj\" (UniqueName: \"kubernetes.io/projected/a582e7aa-1025-4a28-9dc8-18fbb5d1d857-kube-api-access-dvpnj\") pod \"barbican-db-create-656vg\" (UID: \"a582e7aa-1025-4a28-9dc8-18fbb5d1d857\") " pod="openstack/barbican-db-create-656vg" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.194128 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60bc235d-395c-4e4b-be2a-26e2f7e7bf78-combined-ca-bundle\") pod \"keystone-db-sync-9lfvb\" (UID: \"60bc235d-395c-4e4b-be2a-26e2f7e7bf78\") " pod="openstack/keystone-db-sync-9lfvb" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.194229 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7718d50a-4a43-4c73-90ab-b7019cc29fb3-operator-scripts\") pod \"barbican-e4cc-account-create-wbqq7\" (UID: \"7718d50a-4a43-4c73-90ab-b7019cc29fb3\") " pod="openstack/barbican-e4cc-account-create-wbqq7" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.194269 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7pvt\" (UniqueName: \"kubernetes.io/projected/60bc235d-395c-4e4b-be2a-26e2f7e7bf78-kube-api-access-b7pvt\") pod \"keystone-db-sync-9lfvb\" (UID: \"60bc235d-395c-4e4b-be2a-26e2f7e7bf78\") " pod="openstack/keystone-db-sync-9lfvb" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.194301 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60bc235d-395c-4e4b-be2a-26e2f7e7bf78-config-data\") pod \"keystone-db-sync-9lfvb\" (UID: \"60bc235d-395c-4e4b-be2a-26e2f7e7bf78\") " pod="openstack/keystone-db-sync-9lfvb" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.194370 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg69v\" (UniqueName: \"kubernetes.io/projected/7718d50a-4a43-4c73-90ab-b7019cc29fb3-kube-api-access-hg69v\") pod \"barbican-e4cc-account-create-wbqq7\" (UID: \"7718d50a-4a43-4c73-90ab-b7019cc29fb3\") " pod="openstack/barbican-e4cc-account-create-wbqq7" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.195264 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7718d50a-4a43-4c73-90ab-b7019cc29fb3-operator-scripts\") pod \"barbican-e4cc-account-create-wbqq7\" (UID: \"7718d50a-4a43-4c73-90ab-b7019cc29fb3\") " pod="openstack/barbican-e4cc-account-create-wbqq7" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.208504 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-wwnbd"] Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.209629 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-wwnbd" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.216992 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg69v\" (UniqueName: \"kubernetes.io/projected/7718d50a-4a43-4c73-90ab-b7019cc29fb3-kube-api-access-hg69v\") pod \"barbican-e4cc-account-create-wbqq7\" (UID: \"7718d50a-4a43-4c73-90ab-b7019cc29fb3\") " pod="openstack/barbican-e4cc-account-create-wbqq7" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.234498 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-wwnbd"] Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.234849 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-656vg" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.271141 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c821-account-create-hkfkc" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.295683 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60bc235d-395c-4e4b-be2a-26e2f7e7bf78-combined-ca-bundle\") pod \"keystone-db-sync-9lfvb\" (UID: \"60bc235d-395c-4e4b-be2a-26e2f7e7bf78\") " pod="openstack/keystone-db-sync-9lfvb" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.295768 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4b92ece-807b-4c8b-9c77-906c5e70804c-operator-scripts\") pod \"neutron-db-create-wwnbd\" (UID: \"a4b92ece-807b-4c8b-9c77-906c5e70804c\") " pod="openstack/neutron-db-create-wwnbd" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.295820 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7pvt\" (UniqueName: \"kubernetes.io/projected/60bc235d-395c-4e4b-be2a-26e2f7e7bf78-kube-api-access-b7pvt\") pod \"keystone-db-sync-9lfvb\" (UID: \"60bc235d-395c-4e4b-be2a-26e2f7e7bf78\") " pod="openstack/keystone-db-sync-9lfvb" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.295851 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60bc235d-395c-4e4b-be2a-26e2f7e7bf78-config-data\") pod \"keystone-db-sync-9lfvb\" (UID: \"60bc235d-395c-4e4b-be2a-26e2f7e7bf78\") " pod="openstack/keystone-db-sync-9lfvb" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.295877 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj6lt\" (UniqueName: \"kubernetes.io/projected/a4b92ece-807b-4c8b-9c77-906c5e70804c-kube-api-access-kj6lt\") pod \"neutron-db-create-wwnbd\" (UID: \"a4b92ece-807b-4c8b-9c77-906c5e70804c\") " pod="openstack/neutron-db-create-wwnbd" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.302252 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60bc235d-395c-4e4b-be2a-26e2f7e7bf78-config-data\") pod \"keystone-db-sync-9lfvb\" (UID: \"60bc235d-395c-4e4b-be2a-26e2f7e7bf78\") " pod="openstack/keystone-db-sync-9lfvb" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.312705 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7pvt\" (UniqueName: 
\"kubernetes.io/projected/60bc235d-395c-4e4b-be2a-26e2f7e7bf78-kube-api-access-b7pvt\") pod \"keystone-db-sync-9lfvb\" (UID: \"60bc235d-395c-4e4b-be2a-26e2f7e7bf78\") " pod="openstack/keystone-db-sync-9lfvb" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.319226 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60bc235d-395c-4e4b-be2a-26e2f7e7bf78-combined-ca-bundle\") pod \"keystone-db-sync-9lfvb\" (UID: \"60bc235d-395c-4e4b-be2a-26e2f7e7bf78\") " pod="openstack/keystone-db-sync-9lfvb" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.360824 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e4cc-account-create-wbqq7" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.378385 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c053-account-create-pnjvh"] Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.379323 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c053-account-create-pnjvh" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.383169 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.395470 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c053-account-create-pnjvh"] Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.397830 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4b92ece-807b-4c8b-9c77-906c5e70804c-operator-scripts\") pod \"neutron-db-create-wwnbd\" (UID: \"a4b92ece-807b-4c8b-9c77-906c5e70804c\") " pod="openstack/neutron-db-create-wwnbd" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.397888 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj6lt\" (UniqueName: \"kubernetes.io/projected/a4b92ece-807b-4c8b-9c77-906c5e70804c-kube-api-access-kj6lt\") pod \"neutron-db-create-wwnbd\" (UID: \"a4b92ece-807b-4c8b-9c77-906c5e70804c\") " pod="openstack/neutron-db-create-wwnbd" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.398583 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4b92ece-807b-4c8b-9c77-906c5e70804c-operator-scripts\") pod \"neutron-db-create-wwnbd\" (UID: \"a4b92ece-807b-4c8b-9c77-906c5e70804c\") " pod="openstack/neutron-db-create-wwnbd" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.414938 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj6lt\" (UniqueName: \"kubernetes.io/projected/a4b92ece-807b-4c8b-9c77-906c5e70804c-kube-api-access-kj6lt\") pod \"neutron-db-create-wwnbd\" (UID: \"a4b92ece-807b-4c8b-9c77-906c5e70804c\") " pod="openstack/neutron-db-create-wwnbd" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.483647 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-9lfvb" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.500094 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/719445f9-11cc-4f25-bf8d-bdcb03e1b443-operator-scripts\") pod \"neutron-c053-account-create-pnjvh\" (UID: \"719445f9-11cc-4f25-bf8d-bdcb03e1b443\") " pod="openstack/neutron-c053-account-create-pnjvh" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.501135 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hqql\" (UniqueName: \"kubernetes.io/projected/719445f9-11cc-4f25-bf8d-bdcb03e1b443-kube-api-access-8hqql\") pod \"neutron-c053-account-create-pnjvh\" (UID: \"719445f9-11cc-4f25-bf8d-bdcb03e1b443\") " pod="openstack/neutron-c053-account-create-pnjvh" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.581650 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-wwnbd" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.603617 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/719445f9-11cc-4f25-bf8d-bdcb03e1b443-operator-scripts\") pod \"neutron-c053-account-create-pnjvh\" (UID: \"719445f9-11cc-4f25-bf8d-bdcb03e1b443\") " pod="openstack/neutron-c053-account-create-pnjvh" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.603706 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hqql\" (UniqueName: \"kubernetes.io/projected/719445f9-11cc-4f25-bf8d-bdcb03e1b443-kube-api-access-8hqql\") pod \"neutron-c053-account-create-pnjvh\" (UID: \"719445f9-11cc-4f25-bf8d-bdcb03e1b443\") " pod="openstack/neutron-c053-account-create-pnjvh" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.604520 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/719445f9-11cc-4f25-bf8d-bdcb03e1b443-operator-scripts\") pod \"neutron-c053-account-create-pnjvh\" (UID: \"719445f9-11cc-4f25-bf8d-bdcb03e1b443\") " pod="openstack/neutron-c053-account-create-pnjvh" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.623850 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hqql\" (UniqueName: \"kubernetes.io/projected/719445f9-11cc-4f25-bf8d-bdcb03e1b443-kube-api-access-8hqql\") pod \"neutron-c053-account-create-pnjvh\" (UID: \"719445f9-11cc-4f25-bf8d-bdcb03e1b443\") " pod="openstack/neutron-c053-account-create-pnjvh" Nov 23 15:01:18 crc kubenswrapper[4718]: I1123 15:01:18.698947 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c053-account-create-pnjvh" Nov 23 15:01:23 crc kubenswrapper[4718]: I1123 15:01:23.053295 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 15:01:23 crc kubenswrapper[4718]: I1123 15:01:23.053889 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 15:01:23 crc kubenswrapper[4718]: I1123 15:01:23.054262 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" Nov 23 15:01:23 crc kubenswrapper[4718]: I1123 15:01:23.056926 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99f9fac909b59d4f75ad1a92599badf9d1f5ce639b6a71381d289bfc1cc670ef"} pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 15:01:23 crc kubenswrapper[4718]: I1123 15:01:23.057013 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" containerID="cri-o://99f9fac909b59d4f75ad1a92599badf9d1f5ce639b6a71381d289bfc1cc670ef" gracePeriod=600 Nov 23 15:01:23 crc kubenswrapper[4718]: I1123 15:01:23.585647 4718 generic.go:334] "Generic (PLEG): container finished" podID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerID="99f9fac909b59d4f75ad1a92599badf9d1f5ce639b6a71381d289bfc1cc670ef" exitCode=0 Nov 23 15:01:23 crc kubenswrapper[4718]: I1123 15:01:23.585689 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerDied","Data":"99f9fac909b59d4f75ad1a92599badf9d1f5ce639b6a71381d289bfc1cc670ef"} Nov 23 15:01:23 crc kubenswrapper[4718]: I1123 15:01:23.585723 4718 scope.go:117] "RemoveContainer" containerID="e672c46344261e8af36921b484538b589267121731870b982143bf0d2e76b2f1" Nov 23 15:01:24 crc kubenswrapper[4718]: I1123 15:01:24.981572 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c821-account-create-hkfkc"] Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.007250 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-656vg"] Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.022927 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-e4cc-account-create-wbqq7"] Nov 23 15:01:25 crc kubenswrapper[4718]: W1123 15:01:25.025077 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7718d50a_4a43_4c73_90ab_b7019cc29fb3.slice/crio-e7721c66fefe36c1475690731ef5c512d9b3b6fb55975b8a02e3d4ae2e1522dd WatchSource:0}: Error finding container e7721c66fefe36c1475690731ef5c512d9b3b6fb55975b8a02e3d4ae2e1522dd: Status 404 
returned error can't find the container with id e7721c66fefe36c1475690731ef5c512d9b3b6fb55975b8a02e3d4ae2e1522dd Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.035056 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zjsxx"] Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.137899 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c053-account-create-pnjvh"] Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.145059 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-9lfvb"] Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.150801 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-wwnbd"] Nov 23 15:01:25 crc kubenswrapper[4718]: W1123 15:01:25.184820 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod719445f9_11cc_4f25_bf8d_bdcb03e1b443.slice/crio-950eaa3c03d33d31cf59528627468b68aab0250ab41da2ce03777f249d055ca2 WatchSource:0}: Error finding container 950eaa3c03d33d31cf59528627468b68aab0250ab41da2ce03777f249d055ca2: Status 404 returned error can't find the container with id 950eaa3c03d33d31cf59528627468b68aab0250ab41da2ce03777f249d055ca2 Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.227609 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 23 15:01:25 crc kubenswrapper[4718]: W1123 15:01:25.244772 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef94753b_867a_4e46_9ff8_66178f25efaa.slice/crio-b9a4c8ab32d2b145608e37d1f40ccbcca875747cb1fcfb936f9ea9293130adf5 WatchSource:0}: Error finding container b9a4c8ab32d2b145608e37d1f40ccbcca875747cb1fcfb936f9ea9293130adf5: Status 404 returned error can't find the container with id b9a4c8ab32d2b145608e37d1f40ccbcca875747cb1fcfb936f9ea9293130adf5 Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.648074 4718 generic.go:334] "Generic (PLEG): container finished" podID="f64934b7-7c67-49de-9f77-ac5c6873a04b" containerID="5a93acbcba642355db3ab8b2c13bc22652ed53e3d87b23d524f8aa13984836b8" exitCode=0 Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.648129 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c821-account-create-hkfkc" event={"ID":"f64934b7-7c67-49de-9f77-ac5c6873a04b","Type":"ContainerDied","Data":"5a93acbcba642355db3ab8b2c13bc22652ed53e3d87b23d524f8aa13984836b8"} Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.648364 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c821-account-create-hkfkc" event={"ID":"f64934b7-7c67-49de-9f77-ac5c6873a04b","Type":"ContainerStarted","Data":"e09eb5a1ecc800b67f99beac28f1d2bd46a04455a4eb7f1137beb5c541bdf704"} Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.651816 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-sz47m" event={"ID":"1935d5b8-9171-4708-9b50-4dbef79d106b","Type":"ContainerStarted","Data":"3d239e018f6426fde1107bd93f66bb487c96cb841c0024b1e617c5728cf62874"} Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.656553 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c053-account-create-pnjvh" event={"ID":"719445f9-11cc-4f25-bf8d-bdcb03e1b443","Type":"ContainerStarted","Data":"ae0ea662ea0dc1808e8aa0fec73cd76701c79218067748699bfde5524cdbd907"} Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 
15:01:25.656594 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c053-account-create-pnjvh" event={"ID":"719445f9-11cc-4f25-bf8d-bdcb03e1b443","Type":"ContainerStarted","Data":"950eaa3c03d33d31cf59528627468b68aab0250ab41da2ce03777f249d055ca2"} Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.657420 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ef94753b-867a-4e46-9ff8-66178f25efaa","Type":"ContainerStarted","Data":"b9a4c8ab32d2b145608e37d1f40ccbcca875747cb1fcfb936f9ea9293130adf5"} Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.658731 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wwnbd" event={"ID":"a4b92ece-807b-4c8b-9c77-906c5e70804c","Type":"ContainerStarted","Data":"765a9eb9db5def8300090a514d22a6b5489f8dddefa5ba36a31e0fc9320106b3"} Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.658775 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wwnbd" event={"ID":"a4b92ece-807b-4c8b-9c77-906c5e70804c","Type":"ContainerStarted","Data":"ca460bed4f3d97e85a6a9b89208bc78d23767fe9bbb3213cf845091632208bb8"} Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.659828 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9lfvb" event={"ID":"60bc235d-395c-4e4b-be2a-26e2f7e7bf78","Type":"ContainerStarted","Data":"8ee805ee289f746fc657b90d71bff6137f933fe72bbff90de2219561447df4c7"} Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.663905 4718 generic.go:334] "Generic (PLEG): container finished" podID="7718d50a-4a43-4c73-90ab-b7019cc29fb3" containerID="039160df563beaf094627e6deadf18d2f33f46a614921a86711a30592765f3f3" exitCode=0 Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.663965 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e4cc-account-create-wbqq7" event={"ID":"7718d50a-4a43-4c73-90ab-b7019cc29fb3","Type":"ContainerDied","Data":"039160df563beaf094627e6deadf18d2f33f46a614921a86711a30592765f3f3"} Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.663989 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e4cc-account-create-wbqq7" event={"ID":"7718d50a-4a43-4c73-90ab-b7019cc29fb3","Type":"ContainerStarted","Data":"e7721c66fefe36c1475690731ef5c512d9b3b6fb55975b8a02e3d4ae2e1522dd"} Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.665069 4718 generic.go:334] "Generic (PLEG): container finished" podID="cc37da05-93d6-403b-b320-c2445c7880cd" containerID="99d685242e28f1b9c704b5364be1c9f9dae2fa6b75d9a23e1e3db3290ee373b6" exitCode=0 Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.665104 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zjsxx" event={"ID":"cc37da05-93d6-403b-b320-c2445c7880cd","Type":"ContainerDied","Data":"99d685242e28f1b9c704b5364be1c9f9dae2fa6b75d9a23e1e3db3290ee373b6"} Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.665118 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zjsxx" event={"ID":"cc37da05-93d6-403b-b320-c2445c7880cd","Type":"ContainerStarted","Data":"afc04df3970399afa86ed133105a668d6403d779e84963e83f891faf758146f1"} Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.666728 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" 
event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerStarted","Data":"71577aa824008968487c33bc21787ce3eb07e3b6f70d6cf5aac37a6881128f0a"} Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.668748 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-656vg" event={"ID":"a582e7aa-1025-4a28-9dc8-18fbb5d1d857","Type":"ContainerStarted","Data":"f1c4531eabee56929904e35aa51e956c96ed3d32c9203f598b06e943c4093bdd"} Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.668774 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-656vg" event={"ID":"a582e7aa-1025-4a28-9dc8-18fbb5d1d857","Type":"ContainerStarted","Data":"1150d58d645dc3bd24adec1c953b8b9c10707d1234d3de0c7871222265becb4d"} Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.675603 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-wwnbd" podStartSLOduration=7.675585497 podStartE2EDuration="7.675585497s" podCreationTimestamp="2025-11-23 15:01:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:01:25.674543039 +0000 UTC m=+936.914162893" watchObservedRunningTime="2025-11-23 15:01:25.675585497 +0000 UTC m=+936.915205341" Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.700163 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-sz47m" podStartSLOduration=2.437868047 podStartE2EDuration="14.700141056s" podCreationTimestamp="2025-11-23 15:01:11 +0000 UTC" firstStartedPulling="2025-11-23 15:01:12.259597697 +0000 UTC m=+923.499217561" lastFinishedPulling="2025-11-23 15:01:24.521870726 +0000 UTC m=+935.761490570" observedRunningTime="2025-11-23 15:01:25.689806934 +0000 UTC m=+936.929426798" watchObservedRunningTime="2025-11-23 15:01:25.700141056 +0000 UTC m=+936.939760920" Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.712181 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c053-account-create-pnjvh" podStartSLOduration=7.712123151 podStartE2EDuration="7.712123151s" podCreationTimestamp="2025-11-23 15:01:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:01:25.705350727 +0000 UTC m=+936.944970581" watchObservedRunningTime="2025-11-23 15:01:25.712123151 +0000 UTC m=+936.951743025" Nov 23 15:01:25 crc kubenswrapper[4718]: I1123 15:01:25.737089 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-656vg" podStartSLOduration=8.73707039 podStartE2EDuration="8.73707039s" podCreationTimestamp="2025-11-23 15:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:01:25.730992735 +0000 UTC m=+936.970612629" watchObservedRunningTime="2025-11-23 15:01:25.73707039 +0000 UTC m=+936.976690244" Nov 23 15:01:26 crc kubenswrapper[4718]: I1123 15:01:26.676923 4718 generic.go:334] "Generic (PLEG): container finished" podID="a582e7aa-1025-4a28-9dc8-18fbb5d1d857" containerID="f1c4531eabee56929904e35aa51e956c96ed3d32c9203f598b06e943c4093bdd" exitCode=0 Nov 23 15:01:26 crc kubenswrapper[4718]: I1123 15:01:26.677500 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-656vg" 
event={"ID":"a582e7aa-1025-4a28-9dc8-18fbb5d1d857","Type":"ContainerDied","Data":"f1c4531eabee56929904e35aa51e956c96ed3d32c9203f598b06e943c4093bdd"} Nov 23 15:01:26 crc kubenswrapper[4718]: I1123 15:01:26.679651 4718 generic.go:334] "Generic (PLEG): container finished" podID="719445f9-11cc-4f25-bf8d-bdcb03e1b443" containerID="ae0ea662ea0dc1808e8aa0fec73cd76701c79218067748699bfde5524cdbd907" exitCode=0 Nov 23 15:01:26 crc kubenswrapper[4718]: I1123 15:01:26.679691 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c053-account-create-pnjvh" event={"ID":"719445f9-11cc-4f25-bf8d-bdcb03e1b443","Type":"ContainerDied","Data":"ae0ea662ea0dc1808e8aa0fec73cd76701c79218067748699bfde5524cdbd907"} Nov 23 15:01:26 crc kubenswrapper[4718]: I1123 15:01:26.681264 4718 generic.go:334] "Generic (PLEG): container finished" podID="a4b92ece-807b-4c8b-9c77-906c5e70804c" containerID="765a9eb9db5def8300090a514d22a6b5489f8dddefa5ba36a31e0fc9320106b3" exitCode=0 Nov 23 15:01:26 crc kubenswrapper[4718]: I1123 15:01:26.681471 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wwnbd" event={"ID":"a4b92ece-807b-4c8b-9c77-906c5e70804c","Type":"ContainerDied","Data":"765a9eb9db5def8300090a514d22a6b5489f8dddefa5ba36a31e0fc9320106b3"} Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.254673 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e4cc-account-create-wbqq7" Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.262486 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zjsxx" Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.268573 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c821-account-create-hkfkc" Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.386827 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg69v\" (UniqueName: \"kubernetes.io/projected/7718d50a-4a43-4c73-90ab-b7019cc29fb3-kube-api-access-hg69v\") pod \"7718d50a-4a43-4c73-90ab-b7019cc29fb3\" (UID: \"7718d50a-4a43-4c73-90ab-b7019cc29fb3\") " Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.386897 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc37da05-93d6-403b-b320-c2445c7880cd-operator-scripts\") pod \"cc37da05-93d6-403b-b320-c2445c7880cd\" (UID: \"cc37da05-93d6-403b-b320-c2445c7880cd\") " Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.386959 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c46jx\" (UniqueName: \"kubernetes.io/projected/f64934b7-7c67-49de-9f77-ac5c6873a04b-kube-api-access-c46jx\") pod \"f64934b7-7c67-49de-9f77-ac5c6873a04b\" (UID: \"f64934b7-7c67-49de-9f77-ac5c6873a04b\") " Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.387009 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f64934b7-7c67-49de-9f77-ac5c6873a04b-operator-scripts\") pod \"f64934b7-7c67-49de-9f77-ac5c6873a04b\" (UID: \"f64934b7-7c67-49de-9f77-ac5c6873a04b\") " Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.387065 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j466b\" (UniqueName: \"kubernetes.io/projected/cc37da05-93d6-403b-b320-c2445c7880cd-kube-api-access-j466b\") pod \"cc37da05-93d6-403b-b320-c2445c7880cd\" (UID: \"cc37da05-93d6-403b-b320-c2445c7880cd\") " Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.387128 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7718d50a-4a43-4c73-90ab-b7019cc29fb3-operator-scripts\") pod \"7718d50a-4a43-4c73-90ab-b7019cc29fb3\" (UID: \"7718d50a-4a43-4c73-90ab-b7019cc29fb3\") " Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.387925 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7718d50a-4a43-4c73-90ab-b7019cc29fb3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7718d50a-4a43-4c73-90ab-b7019cc29fb3" (UID: "7718d50a-4a43-4c73-90ab-b7019cc29fb3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.388381 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc37da05-93d6-403b-b320-c2445c7880cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cc37da05-93d6-403b-b320-c2445c7880cd" (UID: "cc37da05-93d6-403b-b320-c2445c7880cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.388611 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f64934b7-7c67-49de-9f77-ac5c6873a04b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f64934b7-7c67-49de-9f77-ac5c6873a04b" (UID: "f64934b7-7c67-49de-9f77-ac5c6873a04b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.393001 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc37da05-93d6-403b-b320-c2445c7880cd-kube-api-access-j466b" (OuterVolumeSpecName: "kube-api-access-j466b") pod "cc37da05-93d6-403b-b320-c2445c7880cd" (UID: "cc37da05-93d6-403b-b320-c2445c7880cd"). InnerVolumeSpecName "kube-api-access-j466b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.393089 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7718d50a-4a43-4c73-90ab-b7019cc29fb3-kube-api-access-hg69v" (OuterVolumeSpecName: "kube-api-access-hg69v") pod "7718d50a-4a43-4c73-90ab-b7019cc29fb3" (UID: "7718d50a-4a43-4c73-90ab-b7019cc29fb3"). InnerVolumeSpecName "kube-api-access-hg69v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.393110 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f64934b7-7c67-49de-9f77-ac5c6873a04b-kube-api-access-c46jx" (OuterVolumeSpecName: "kube-api-access-c46jx") pod "f64934b7-7c67-49de-9f77-ac5c6873a04b" (UID: "f64934b7-7c67-49de-9f77-ac5c6873a04b"). InnerVolumeSpecName "kube-api-access-c46jx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.488536 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c46jx\" (UniqueName: \"kubernetes.io/projected/f64934b7-7c67-49de-9f77-ac5c6873a04b-kube-api-access-c46jx\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.488567 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f64934b7-7c67-49de-9f77-ac5c6873a04b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.488576 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j466b\" (UniqueName: \"kubernetes.io/projected/cc37da05-93d6-403b-b320-c2445c7880cd-kube-api-access-j466b\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.488584 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7718d50a-4a43-4c73-90ab-b7019cc29fb3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.488592 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg69v\" (UniqueName: \"kubernetes.io/projected/7718d50a-4a43-4c73-90ab-b7019cc29fb3-kube-api-access-hg69v\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.488600 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc37da05-93d6-403b-b320-c2445c7880cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.693656 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-e4cc-account-create-wbqq7" Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.693653 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e4cc-account-create-wbqq7" event={"ID":"7718d50a-4a43-4c73-90ab-b7019cc29fb3","Type":"ContainerDied","Data":"e7721c66fefe36c1475690731ef5c512d9b3b6fb55975b8a02e3d4ae2e1522dd"} Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.693719 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7721c66fefe36c1475690731ef5c512d9b3b6fb55975b8a02e3d4ae2e1522dd" Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.697828 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zjsxx" event={"ID":"cc37da05-93d6-403b-b320-c2445c7880cd","Type":"ContainerDied","Data":"afc04df3970399afa86ed133105a668d6403d779e84963e83f891faf758146f1"} Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.697858 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zjsxx" Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.697866 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afc04df3970399afa86ed133105a668d6403d779e84963e83f891faf758146f1" Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.701112 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c821-account-create-hkfkc" event={"ID":"f64934b7-7c67-49de-9f77-ac5c6873a04b","Type":"ContainerDied","Data":"e09eb5a1ecc800b67f99beac28f1d2bd46a04455a4eb7f1137beb5c541bdf704"} Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.701160 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e09eb5a1ecc800b67f99beac28f1d2bd46a04455a4eb7f1137beb5c541bdf704" Nov 23 15:01:27 crc kubenswrapper[4718]: I1123 15:01:27.701214 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c821-account-create-hkfkc" Nov 23 15:01:29 crc kubenswrapper[4718]: I1123 15:01:29.992388 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-wwnbd" Nov 23 15:01:29 crc kubenswrapper[4718]: I1123 15:01:29.997398 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-656vg" Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.009052 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c053-account-create-pnjvh" Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.135512 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4b92ece-807b-4c8b-9c77-906c5e70804c-operator-scripts\") pod \"a4b92ece-807b-4c8b-9c77-906c5e70804c\" (UID: \"a4b92ece-807b-4c8b-9c77-906c5e70804c\") " Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.135706 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hqql\" (UniqueName: \"kubernetes.io/projected/719445f9-11cc-4f25-bf8d-bdcb03e1b443-kube-api-access-8hqql\") pod \"719445f9-11cc-4f25-bf8d-bdcb03e1b443\" (UID: \"719445f9-11cc-4f25-bf8d-bdcb03e1b443\") " Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.136469 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b92ece-807b-4c8b-9c77-906c5e70804c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a4b92ece-807b-4c8b-9c77-906c5e70804c" (UID: "a4b92ece-807b-4c8b-9c77-906c5e70804c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.136608 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj6lt\" (UniqueName: \"kubernetes.io/projected/a4b92ece-807b-4c8b-9c77-906c5e70804c-kube-api-access-kj6lt\") pod \"a4b92ece-807b-4c8b-9c77-906c5e70804c\" (UID: \"a4b92ece-807b-4c8b-9c77-906c5e70804c\") " Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.136728 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/719445f9-11cc-4f25-bf8d-bdcb03e1b443-operator-scripts\") pod \"719445f9-11cc-4f25-bf8d-bdcb03e1b443\" (UID: \"719445f9-11cc-4f25-bf8d-bdcb03e1b443\") " Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.136897 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvpnj\" (UniqueName: \"kubernetes.io/projected/a582e7aa-1025-4a28-9dc8-18fbb5d1d857-kube-api-access-dvpnj\") pod \"a582e7aa-1025-4a28-9dc8-18fbb5d1d857\" (UID: \"a582e7aa-1025-4a28-9dc8-18fbb5d1d857\") " Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.137055 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a582e7aa-1025-4a28-9dc8-18fbb5d1d857-operator-scripts\") pod \"a582e7aa-1025-4a28-9dc8-18fbb5d1d857\" (UID: \"a582e7aa-1025-4a28-9dc8-18fbb5d1d857\") " Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.137387 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/719445f9-11cc-4f25-bf8d-bdcb03e1b443-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "719445f9-11cc-4f25-bf8d-bdcb03e1b443" (UID: "719445f9-11cc-4f25-bf8d-bdcb03e1b443"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.137689 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a582e7aa-1025-4a28-9dc8-18fbb5d1d857-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a582e7aa-1025-4a28-9dc8-18fbb5d1d857" (UID: "a582e7aa-1025-4a28-9dc8-18fbb5d1d857"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.137936 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/719445f9-11cc-4f25-bf8d-bdcb03e1b443-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.137963 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a582e7aa-1025-4a28-9dc8-18fbb5d1d857-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.137975 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4b92ece-807b-4c8b-9c77-906c5e70804c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.141728 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/719445f9-11cc-4f25-bf8d-bdcb03e1b443-kube-api-access-8hqql" (OuterVolumeSpecName: "kube-api-access-8hqql") pod "719445f9-11cc-4f25-bf8d-bdcb03e1b443" (UID: "719445f9-11cc-4f25-bf8d-bdcb03e1b443"). InnerVolumeSpecName "kube-api-access-8hqql". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.141923 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4b92ece-807b-4c8b-9c77-906c5e70804c-kube-api-access-kj6lt" (OuterVolumeSpecName: "kube-api-access-kj6lt") pod "a4b92ece-807b-4c8b-9c77-906c5e70804c" (UID: "a4b92ece-807b-4c8b-9c77-906c5e70804c"). InnerVolumeSpecName "kube-api-access-kj6lt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.143078 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a582e7aa-1025-4a28-9dc8-18fbb5d1d857-kube-api-access-dvpnj" (OuterVolumeSpecName: "kube-api-access-dvpnj") pod "a582e7aa-1025-4a28-9dc8-18fbb5d1d857" (UID: "a582e7aa-1025-4a28-9dc8-18fbb5d1d857"). InnerVolumeSpecName "kube-api-access-dvpnj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.240240 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hqql\" (UniqueName: \"kubernetes.io/projected/719445f9-11cc-4f25-bf8d-bdcb03e1b443-kube-api-access-8hqql\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.240294 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj6lt\" (UniqueName: \"kubernetes.io/projected/a4b92ece-807b-4c8b-9c77-906c5e70804c-kube-api-access-kj6lt\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.240309 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvpnj\" (UniqueName: \"kubernetes.io/projected/a582e7aa-1025-4a28-9dc8-18fbb5d1d857-kube-api-access-dvpnj\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.727916 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-656vg" event={"ID":"a582e7aa-1025-4a28-9dc8-18fbb5d1d857","Type":"ContainerDied","Data":"1150d58d645dc3bd24adec1c953b8b9c10707d1234d3de0c7871222265becb4d"} Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.728249 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1150d58d645dc3bd24adec1c953b8b9c10707d1234d3de0c7871222265becb4d" Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.727940 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-656vg" Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.729976 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9lfvb" event={"ID":"60bc235d-395c-4e4b-be2a-26e2f7e7bf78","Type":"ContainerStarted","Data":"1cd1b3be77b6cf9f153c14c78030cccd0867fa5a7b0edcb5fc39c9551bd8fc78"} Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.732126 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c053-account-create-pnjvh" event={"ID":"719445f9-11cc-4f25-bf8d-bdcb03e1b443","Type":"ContainerDied","Data":"950eaa3c03d33d31cf59528627468b68aab0250ab41da2ce03777f249d055ca2"} Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.732170 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="950eaa3c03d33d31cf59528627468b68aab0250ab41da2ce03777f249d055ca2" Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.732139 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c053-account-create-pnjvh" Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.736920 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ef94753b-867a-4e46-9ff8-66178f25efaa","Type":"ContainerStarted","Data":"d3e896429074313e5b1eb8a1ccaab19e3d05b714f269e4b97a051b6e53ef99dd"} Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.739261 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wwnbd" event={"ID":"a4b92ece-807b-4c8b-9c77-906c5e70804c","Type":"ContainerDied","Data":"ca460bed4f3d97e85a6a9b89208bc78d23767fe9bbb3213cf845091632208bb8"} Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.739301 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca460bed4f3d97e85a6a9b89208bc78d23767fe9bbb3213cf845091632208bb8" Nov 23 15:01:30 crc kubenswrapper[4718]: I1123 15:01:30.739401 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-wwnbd" Nov 23 15:01:31 crc kubenswrapper[4718]: I1123 15:01:31.765939 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ef94753b-867a-4e46-9ff8-66178f25efaa","Type":"ContainerStarted","Data":"79497980b71edcf63e33322bef592b1d7078f4a317c52e89ea433595b2651984"} Nov 23 15:01:31 crc kubenswrapper[4718]: I1123 15:01:31.766006 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ef94753b-867a-4e46-9ff8-66178f25efaa","Type":"ContainerStarted","Data":"84e266f9b475505924f65bd3304060edbafca3cf8967152fb0588c4a1d71a3c2"} Nov 23 15:01:31 crc kubenswrapper[4718]: I1123 15:01:31.766028 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ef94753b-867a-4e46-9ff8-66178f25efaa","Type":"ContainerStarted","Data":"91d18f45bca93ca6e96140fc6a2d1b5ad293f51e46e837a4068cb08abbf79bda"} Nov 23 15:01:31 crc kubenswrapper[4718]: I1123 15:01:31.810586 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-9lfvb" podStartSLOduration=8.506335065 podStartE2EDuration="13.81056109s" podCreationTimestamp="2025-11-23 15:01:18 +0000 UTC" firstStartedPulling="2025-11-23 15:01:25.153068065 +0000 UTC m=+936.392687909" lastFinishedPulling="2025-11-23 15:01:30.45729408 +0000 UTC m=+941.696913934" observedRunningTime="2025-11-23 15:01:31.802000547 +0000 UTC m=+943.041620421" watchObservedRunningTime="2025-11-23 15:01:31.81056109 +0000 UTC m=+943.050180954" Nov 23 15:01:33 crc kubenswrapper[4718]: I1123 15:01:33.795431 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ef94753b-867a-4e46-9ff8-66178f25efaa","Type":"ContainerStarted","Data":"00a2c0c42f0260d6d01df1a17223545f7f005357f9634663f26a453207ccaa92"} Nov 23 15:01:33 crc kubenswrapper[4718]: I1123 15:01:33.796259 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ef94753b-867a-4e46-9ff8-66178f25efaa","Type":"ContainerStarted","Data":"7c5029b9a42f5a847120bd1d802ea16949a77bd5dfab32cde2c33fcc82ec4754"} Nov 23 15:01:34 crc kubenswrapper[4718]: I1123 15:01:34.809886 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ef94753b-867a-4e46-9ff8-66178f25efaa","Type":"ContainerStarted","Data":"387e031eed8ed70220d51620edac801a587d11653bcd33f960de7869d92c6a19"} Nov 23 15:01:34 crc kubenswrapper[4718]: 
I1123 15:01:34.810841 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ef94753b-867a-4e46-9ff8-66178f25efaa","Type":"ContainerStarted","Data":"a8e21d2529e716654322a0ac9fc671258652a83930b5818670fb20dcd34df155"} Nov 23 15:01:36 crc kubenswrapper[4718]: I1123 15:01:36.837815 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ef94753b-867a-4e46-9ff8-66178f25efaa","Type":"ContainerStarted","Data":"d6a3249b53a1cdd22ffdf7b1a3a8d72518f2cfb02a6027e19febf7a9c3ea24ab"} Nov 23 15:01:36 crc kubenswrapper[4718]: I1123 15:01:36.838964 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ef94753b-867a-4e46-9ff8-66178f25efaa","Type":"ContainerStarted","Data":"c151c27da09c71edec9c7487fd06e974a832e314b66b7cb8fac07f184ce0aa2b"} Nov 23 15:01:37 crc kubenswrapper[4718]: I1123 15:01:37.856538 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ef94753b-867a-4e46-9ff8-66178f25efaa","Type":"ContainerStarted","Data":"c8718aff748ff81237ff3055d84941d9b6011285239e6fba54e031dd93942f50"} Nov 23 15:01:37 crc kubenswrapper[4718]: I1123 15:01:37.857228 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ef94753b-867a-4e46-9ff8-66178f25efaa","Type":"ContainerStarted","Data":"1a2770b5aa7b5f2e62451ecc27227eb8bafc7b2b7cadbe4ef632a992c8de9da1"} Nov 23 15:01:37 crc kubenswrapper[4718]: I1123 15:01:37.857252 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ef94753b-867a-4e46-9ff8-66178f25efaa","Type":"ContainerStarted","Data":"7ea5cb5deb2ffe5d6301bf6babed6f9252b81f72c945ee8f2a8fd9bc2b68b174"} Nov 23 15:01:38 crc kubenswrapper[4718]: I1123 15:01:38.877729 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ef94753b-867a-4e46-9ff8-66178f25efaa","Type":"ContainerStarted","Data":"b88b21bd52a0c76a631047a10514307f04b8cba5cbeff19ca75bf6d6b6c70595"} Nov 23 15:01:38 crc kubenswrapper[4718]: I1123 15:01:38.878028 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ef94753b-867a-4e46-9ff8-66178f25efaa","Type":"ContainerStarted","Data":"22febfeddf656f43e8ed4c3937eba98fbf7cb3e0f12d1fc07ec34adaed040fb2"} Nov 23 15:01:38 crc kubenswrapper[4718]: I1123 15:01:38.931392 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=45.064802964 podStartE2EDuration="55.931357107s" podCreationTimestamp="2025-11-23 15:00:43 +0000 UTC" firstStartedPulling="2025-11-23 15:01:25.248913351 +0000 UTC m=+936.488533195" lastFinishedPulling="2025-11-23 15:01:36.115467484 +0000 UTC m=+947.355087338" observedRunningTime="2025-11-23 15:01:38.919082453 +0000 UTC m=+950.158702297" watchObservedRunningTime="2025-11-23 15:01:38.931357107 +0000 UTC m=+950.170976991" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.191313 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-s65rw"] Nov 23 15:01:39 crc kubenswrapper[4718]: E1123 15:01:39.191774 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b92ece-807b-4c8b-9c77-906c5e70804c" containerName="mariadb-database-create" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.191798 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b92ece-807b-4c8b-9c77-906c5e70804c" 
containerName="mariadb-database-create" Nov 23 15:01:39 crc kubenswrapper[4718]: E1123 15:01:39.191825 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a582e7aa-1025-4a28-9dc8-18fbb5d1d857" containerName="mariadb-database-create" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.191837 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a582e7aa-1025-4a28-9dc8-18fbb5d1d857" containerName="mariadb-database-create" Nov 23 15:01:39 crc kubenswrapper[4718]: E1123 15:01:39.191853 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7718d50a-4a43-4c73-90ab-b7019cc29fb3" containerName="mariadb-account-create" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.191862 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7718d50a-4a43-4c73-90ab-b7019cc29fb3" containerName="mariadb-account-create" Nov 23 15:01:39 crc kubenswrapper[4718]: E1123 15:01:39.191888 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc37da05-93d6-403b-b320-c2445c7880cd" containerName="mariadb-database-create" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.191898 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc37da05-93d6-403b-b320-c2445c7880cd" containerName="mariadb-database-create" Nov 23 15:01:39 crc kubenswrapper[4718]: E1123 15:01:39.191917 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="719445f9-11cc-4f25-bf8d-bdcb03e1b443" containerName="mariadb-account-create" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.191929 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="719445f9-11cc-4f25-bf8d-bdcb03e1b443" containerName="mariadb-account-create" Nov 23 15:01:39 crc kubenswrapper[4718]: E1123 15:01:39.191957 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f64934b7-7c67-49de-9f77-ac5c6873a04b" containerName="mariadb-account-create" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.191965 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f64934b7-7c67-49de-9f77-ac5c6873a04b" containerName="mariadb-account-create" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.192234 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4b92ece-807b-4c8b-9c77-906c5e70804c" containerName="mariadb-database-create" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.192268 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="719445f9-11cc-4f25-bf8d-bdcb03e1b443" containerName="mariadb-account-create" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.192289 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f64934b7-7c67-49de-9f77-ac5c6873a04b" containerName="mariadb-account-create" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.192308 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc37da05-93d6-403b-b320-c2445c7880cd" containerName="mariadb-database-create" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.192332 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a582e7aa-1025-4a28-9dc8-18fbb5d1d857" containerName="mariadb-database-create" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.192353 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7718d50a-4a43-4c73-90ab-b7019cc29fb3" containerName="mariadb-account-create" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.193489 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-s65rw" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.195464 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.202025 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-s65rw"] Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.293854 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-s65rw\" (UID: \"3a1117c5-9775-4947-a666-60458548504a\") " pod="openstack/dnsmasq-dns-764c5664d7-s65rw" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.293923 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-s65rw\" (UID: \"3a1117c5-9775-4947-a666-60458548504a\") " pod="openstack/dnsmasq-dns-764c5664d7-s65rw" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.293977 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-dns-svc\") pod \"dnsmasq-dns-764c5664d7-s65rw\" (UID: \"3a1117c5-9775-4947-a666-60458548504a\") " pod="openstack/dnsmasq-dns-764c5664d7-s65rw" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.294009 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4l6h\" (UniqueName: \"kubernetes.io/projected/3a1117c5-9775-4947-a666-60458548504a-kube-api-access-h4l6h\") pod \"dnsmasq-dns-764c5664d7-s65rw\" (UID: \"3a1117c5-9775-4947-a666-60458548504a\") " pod="openstack/dnsmasq-dns-764c5664d7-s65rw" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.294225 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-s65rw\" (UID: \"3a1117c5-9775-4947-a666-60458548504a\") " pod="openstack/dnsmasq-dns-764c5664d7-s65rw" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.294332 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-config\") pod \"dnsmasq-dns-764c5664d7-s65rw\" (UID: \"3a1117c5-9775-4947-a666-60458548504a\") " pod="openstack/dnsmasq-dns-764c5664d7-s65rw" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.395849 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-s65rw\" (UID: \"3a1117c5-9775-4947-a666-60458548504a\") " pod="openstack/dnsmasq-dns-764c5664d7-s65rw" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.396168 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-s65rw\" (UID: 
\"3a1117c5-9775-4947-a666-60458548504a\") " pod="openstack/dnsmasq-dns-764c5664d7-s65rw" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.396302 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-dns-svc\") pod \"dnsmasq-dns-764c5664d7-s65rw\" (UID: \"3a1117c5-9775-4947-a666-60458548504a\") " pod="openstack/dnsmasq-dns-764c5664d7-s65rw" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.396473 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4l6h\" (UniqueName: \"kubernetes.io/projected/3a1117c5-9775-4947-a666-60458548504a-kube-api-access-h4l6h\") pod \"dnsmasq-dns-764c5664d7-s65rw\" (UID: \"3a1117c5-9775-4947-a666-60458548504a\") " pod="openstack/dnsmasq-dns-764c5664d7-s65rw" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.396640 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-s65rw\" (UID: \"3a1117c5-9775-4947-a666-60458548504a\") " pod="openstack/dnsmasq-dns-764c5664d7-s65rw" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.396767 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-config\") pod \"dnsmasq-dns-764c5664d7-s65rw\" (UID: \"3a1117c5-9775-4947-a666-60458548504a\") " pod="openstack/dnsmasq-dns-764c5664d7-s65rw" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.397098 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-s65rw\" (UID: \"3a1117c5-9775-4947-a666-60458548504a\") " pod="openstack/dnsmasq-dns-764c5664d7-s65rw" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.397129 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-s65rw\" (UID: \"3a1117c5-9775-4947-a666-60458548504a\") " pod="openstack/dnsmasq-dns-764c5664d7-s65rw" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.397222 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-dns-svc\") pod \"dnsmasq-dns-764c5664d7-s65rw\" (UID: \"3a1117c5-9775-4947-a666-60458548504a\") " pod="openstack/dnsmasq-dns-764c5664d7-s65rw" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.397291 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-s65rw\" (UID: \"3a1117c5-9775-4947-a666-60458548504a\") " pod="openstack/dnsmasq-dns-764c5664d7-s65rw" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.398123 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-config\") pod \"dnsmasq-dns-764c5664d7-s65rw\" (UID: \"3a1117c5-9775-4947-a666-60458548504a\") " pod="openstack/dnsmasq-dns-764c5664d7-s65rw" Nov 23 15:01:39 crc 
kubenswrapper[4718]: I1123 15:01:39.422463 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4l6h\" (UniqueName: \"kubernetes.io/projected/3a1117c5-9775-4947-a666-60458548504a-kube-api-access-h4l6h\") pod \"dnsmasq-dns-764c5664d7-s65rw\" (UID: \"3a1117c5-9775-4947-a666-60458548504a\") " pod="openstack/dnsmasq-dns-764c5664d7-s65rw" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.514085 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-s65rw" Nov 23 15:01:39 crc kubenswrapper[4718]: I1123 15:01:39.969198 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-s65rw"] Nov 23 15:01:39 crc kubenswrapper[4718]: W1123 15:01:39.976380 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a1117c5_9775_4947_a666_60458548504a.slice/crio-f1bbd42763d5dbde124f77430f0887c793fd90204a0eada3dae6e88e8f7ae43f WatchSource:0}: Error finding container f1bbd42763d5dbde124f77430f0887c793fd90204a0eada3dae6e88e8f7ae43f: Status 404 returned error can't find the container with id f1bbd42763d5dbde124f77430f0887c793fd90204a0eada3dae6e88e8f7ae43f Nov 23 15:01:40 crc kubenswrapper[4718]: I1123 15:01:40.895304 4718 generic.go:334] "Generic (PLEG): container finished" podID="3a1117c5-9775-4947-a666-60458548504a" containerID="e49fba38825102aec57ba0c1a961d0e52c4af31328f1d3dc18db6eea2dfd4906" exitCode=0 Nov 23 15:01:40 crc kubenswrapper[4718]: I1123 15:01:40.895378 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-s65rw" event={"ID":"3a1117c5-9775-4947-a666-60458548504a","Type":"ContainerDied","Data":"e49fba38825102aec57ba0c1a961d0e52c4af31328f1d3dc18db6eea2dfd4906"} Nov 23 15:01:40 crc kubenswrapper[4718]: I1123 15:01:40.895965 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-s65rw" event={"ID":"3a1117c5-9775-4947-a666-60458548504a","Type":"ContainerStarted","Data":"f1bbd42763d5dbde124f77430f0887c793fd90204a0eada3dae6e88e8f7ae43f"} Nov 23 15:01:41 crc kubenswrapper[4718]: I1123 15:01:41.909751 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-s65rw" event={"ID":"3a1117c5-9775-4947-a666-60458548504a","Type":"ContainerStarted","Data":"2d4a563811fe7fbc543e0829be5eda7fab31decd2d36a4bdd34b00798609fb89"} Nov 23 15:01:41 crc kubenswrapper[4718]: I1123 15:01:41.911366 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-s65rw" Nov 23 15:01:41 crc kubenswrapper[4718]: I1123 15:01:41.949680 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-s65rw" podStartSLOduration=2.949647864 podStartE2EDuration="2.949647864s" podCreationTimestamp="2025-11-23 15:01:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:01:41.944200886 +0000 UTC m=+953.183820790" watchObservedRunningTime="2025-11-23 15:01:41.949647864 +0000 UTC m=+953.189267718" Nov 23 15:01:42 crc kubenswrapper[4718]: I1123 15:01:42.920913 4718 generic.go:334] "Generic (PLEG): container finished" podID="60bc235d-395c-4e4b-be2a-26e2f7e7bf78" containerID="1cd1b3be77b6cf9f153c14c78030cccd0867fa5a7b0edcb5fc39c9551bd8fc78" exitCode=0 Nov 23 15:01:42 crc kubenswrapper[4718]: I1123 15:01:42.921163 4718 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9lfvb" event={"ID":"60bc235d-395c-4e4b-be2a-26e2f7e7bf78","Type":"ContainerDied","Data":"1cd1b3be77b6cf9f153c14c78030cccd0867fa5a7b0edcb5fc39c9551bd8fc78"} Nov 23 15:01:44 crc kubenswrapper[4718]: I1123 15:01:44.269865 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-9lfvb" Nov 23 15:01:44 crc kubenswrapper[4718]: I1123 15:01:44.381154 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60bc235d-395c-4e4b-be2a-26e2f7e7bf78-combined-ca-bundle\") pod \"60bc235d-395c-4e4b-be2a-26e2f7e7bf78\" (UID: \"60bc235d-395c-4e4b-be2a-26e2f7e7bf78\") " Nov 23 15:01:44 crc kubenswrapper[4718]: I1123 15:01:44.381291 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60bc235d-395c-4e4b-be2a-26e2f7e7bf78-config-data\") pod \"60bc235d-395c-4e4b-be2a-26e2f7e7bf78\" (UID: \"60bc235d-395c-4e4b-be2a-26e2f7e7bf78\") " Nov 23 15:01:44 crc kubenswrapper[4718]: I1123 15:01:44.381399 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7pvt\" (UniqueName: \"kubernetes.io/projected/60bc235d-395c-4e4b-be2a-26e2f7e7bf78-kube-api-access-b7pvt\") pod \"60bc235d-395c-4e4b-be2a-26e2f7e7bf78\" (UID: \"60bc235d-395c-4e4b-be2a-26e2f7e7bf78\") " Nov 23 15:01:44 crc kubenswrapper[4718]: I1123 15:01:44.387830 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60bc235d-395c-4e4b-be2a-26e2f7e7bf78-kube-api-access-b7pvt" (OuterVolumeSpecName: "kube-api-access-b7pvt") pod "60bc235d-395c-4e4b-be2a-26e2f7e7bf78" (UID: "60bc235d-395c-4e4b-be2a-26e2f7e7bf78"). InnerVolumeSpecName "kube-api-access-b7pvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:01:44 crc kubenswrapper[4718]: I1123 15:01:44.404756 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60bc235d-395c-4e4b-be2a-26e2f7e7bf78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60bc235d-395c-4e4b-be2a-26e2f7e7bf78" (UID: "60bc235d-395c-4e4b-be2a-26e2f7e7bf78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:01:44 crc kubenswrapper[4718]: I1123 15:01:44.446134 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60bc235d-395c-4e4b-be2a-26e2f7e7bf78-config-data" (OuterVolumeSpecName: "config-data") pod "60bc235d-395c-4e4b-be2a-26e2f7e7bf78" (UID: "60bc235d-395c-4e4b-be2a-26e2f7e7bf78"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:01:44 crc kubenswrapper[4718]: I1123 15:01:44.483773 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7pvt\" (UniqueName: \"kubernetes.io/projected/60bc235d-395c-4e4b-be2a-26e2f7e7bf78-kube-api-access-b7pvt\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:44 crc kubenswrapper[4718]: I1123 15:01:44.483816 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60bc235d-395c-4e4b-be2a-26e2f7e7bf78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:44 crc kubenswrapper[4718]: I1123 15:01:44.483830 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60bc235d-395c-4e4b-be2a-26e2f7e7bf78-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:44 crc kubenswrapper[4718]: I1123 15:01:44.940204 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9lfvb" event={"ID":"60bc235d-395c-4e4b-be2a-26e2f7e7bf78","Type":"ContainerDied","Data":"8ee805ee289f746fc657b90d71bff6137f933fe72bbff90de2219561447df4c7"} Nov 23 15:01:44 crc kubenswrapper[4718]: I1123 15:01:44.940248 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ee805ee289f746fc657b90d71bff6137f933fe72bbff90de2219561447df4c7" Nov 23 15:01:44 crc kubenswrapper[4718]: I1123 15:01:44.940288 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-9lfvb" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.210814 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-s65rw"] Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.211276 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-s65rw" podUID="3a1117c5-9775-4947-a666-60458548504a" containerName="dnsmasq-dns" containerID="cri-o://2d4a563811fe7fbc543e0829be5eda7fab31decd2d36a4bdd34b00798609fb89" gracePeriod=10 Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.237076 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dtf42"] Nov 23 15:01:45 crc kubenswrapper[4718]: E1123 15:01:45.237801 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60bc235d-395c-4e4b-be2a-26e2f7e7bf78" containerName="keystone-db-sync" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.237825 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="60bc235d-395c-4e4b-be2a-26e2f7e7bf78" containerName="keystone-db-sync" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.238031 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="60bc235d-395c-4e4b-be2a-26e2f7e7bf78" containerName="keystone-db-sync" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.249324 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dtf42" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.252398 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dtf42"] Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.253312 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.253657 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.253842 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.254010 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xknxt" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.254192 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.262474 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-4s7r2"] Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.264044 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.329594 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-4s7r2"] Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.407303 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-scripts\") pod \"keystone-bootstrap-dtf42\" (UID: \"f4664676-6d14-425e-93d1-40c245ef5b06\") " pod="openstack/keystone-bootstrap-dtf42" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.407368 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-combined-ca-bundle\") pod \"keystone-bootstrap-dtf42\" (UID: \"f4664676-6d14-425e-93d1-40c245ef5b06\") " pod="openstack/keystone-bootstrap-dtf42" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.407393 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfpc6\" (UniqueName: \"kubernetes.io/projected/dce0b886-6f37-41d3-9680-6ed8cc43aa55-kube-api-access-sfpc6\") pod \"dnsmasq-dns-5959f8865f-4s7r2\" (UID: \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\") " pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.407419 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-4s7r2\" (UID: \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\") " pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.407436 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-4s7r2\" (UID: \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\") " 
pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.407466 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-4s7r2\" (UID: \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\") " pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.407501 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-config\") pod \"dnsmasq-dns-5959f8865f-4s7r2\" (UID: \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\") " pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.407533 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-config-data\") pod \"keystone-bootstrap-dtf42\" (UID: \"f4664676-6d14-425e-93d1-40c245ef5b06\") " pod="openstack/keystone-bootstrap-dtf42" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.407561 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-credential-keys\") pod \"keystone-bootstrap-dtf42\" (UID: \"f4664676-6d14-425e-93d1-40c245ef5b06\") " pod="openstack/keystone-bootstrap-dtf42" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.407581 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-fernet-keys\") pod \"keystone-bootstrap-dtf42\" (UID: \"f4664676-6d14-425e-93d1-40c245ef5b06\") " pod="openstack/keystone-bootstrap-dtf42" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.407614 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-dns-svc\") pod \"dnsmasq-dns-5959f8865f-4s7r2\" (UID: \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\") " pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.407633 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx5mx\" (UniqueName: \"kubernetes.io/projected/f4664676-6d14-425e-93d1-40c245ef5b06-kube-api-access-sx5mx\") pod \"keystone-bootstrap-dtf42\" (UID: \"f4664676-6d14-425e-93d1-40c245ef5b06\") " pod="openstack/keystone-bootstrap-dtf42" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.430014 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-69c565788f-fjphh"] Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.431465 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69c565788f-fjphh" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.440058 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.440081 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-vmg2f" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.440315 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.440525 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.452995 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-bppgm"] Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.453972 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bppgm" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.456956 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.457177 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-r5hj2" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.457287 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.484824 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69c565788f-fjphh"] Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.509001 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c5fc14d-062b-4392-97e9-6125fb9b281a-logs\") pod \"horizon-69c565788f-fjphh\" (UID: \"1c5fc14d-062b-4392-97e9-6125fb9b281a\") " pod="openstack/horizon-69c565788f-fjphh" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.509044 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-config\") pod \"dnsmasq-dns-5959f8865f-4s7r2\" (UID: \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\") " pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.509074 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1c5fc14d-062b-4392-97e9-6125fb9b281a-horizon-secret-key\") pod \"horizon-69c565788f-fjphh\" (UID: \"1c5fc14d-062b-4392-97e9-6125fb9b281a\") " pod="openstack/horizon-69c565788f-fjphh" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.509099 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-config-data\") pod \"keystone-bootstrap-dtf42\" (UID: \"f4664676-6d14-425e-93d1-40c245ef5b06\") " pod="openstack/keystone-bootstrap-dtf42" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.509126 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-credential-keys\") pod \"keystone-bootstrap-dtf42\" (UID: 
\"f4664676-6d14-425e-93d1-40c245ef5b06\") " pod="openstack/keystone-bootstrap-dtf42" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.509141 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcstn\" (UniqueName: \"kubernetes.io/projected/1c5fc14d-062b-4392-97e9-6125fb9b281a-kube-api-access-jcstn\") pod \"horizon-69c565788f-fjphh\" (UID: \"1c5fc14d-062b-4392-97e9-6125fb9b281a\") " pod="openstack/horizon-69c565788f-fjphh" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.509158 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c5fc14d-062b-4392-97e9-6125fb9b281a-scripts\") pod \"horizon-69c565788f-fjphh\" (UID: \"1c5fc14d-062b-4392-97e9-6125fb9b281a\") " pod="openstack/horizon-69c565788f-fjphh" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.509176 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-fernet-keys\") pod \"keystone-bootstrap-dtf42\" (UID: \"f4664676-6d14-425e-93d1-40c245ef5b06\") " pod="openstack/keystone-bootstrap-dtf42" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.509209 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-dns-svc\") pod \"dnsmasq-dns-5959f8865f-4s7r2\" (UID: \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\") " pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.509227 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx5mx\" (UniqueName: \"kubernetes.io/projected/f4664676-6d14-425e-93d1-40c245ef5b06-kube-api-access-sx5mx\") pod \"keystone-bootstrap-dtf42\" (UID: \"f4664676-6d14-425e-93d1-40c245ef5b06\") " pod="openstack/keystone-bootstrap-dtf42" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.509250 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-scripts\") pod \"keystone-bootstrap-dtf42\" (UID: \"f4664676-6d14-425e-93d1-40c245ef5b06\") " pod="openstack/keystone-bootstrap-dtf42" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.509275 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-combined-ca-bundle\") pod \"keystone-bootstrap-dtf42\" (UID: \"f4664676-6d14-425e-93d1-40c245ef5b06\") " pod="openstack/keystone-bootstrap-dtf42" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.509292 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfpc6\" (UniqueName: \"kubernetes.io/projected/dce0b886-6f37-41d3-9680-6ed8cc43aa55-kube-api-access-sfpc6\") pod \"dnsmasq-dns-5959f8865f-4s7r2\" (UID: \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\") " pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.509318 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c5fc14d-062b-4392-97e9-6125fb9b281a-config-data\") pod \"horizon-69c565788f-fjphh\" (UID: \"1c5fc14d-062b-4392-97e9-6125fb9b281a\") " 
pod="openstack/horizon-69c565788f-fjphh" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.509334 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-4s7r2\" (UID: \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\") " pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.509349 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-4s7r2\" (UID: \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\") " pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.509363 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-4s7r2\" (UID: \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\") " pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.510261 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-4s7r2\" (UID: \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\") " pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.511024 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-config\") pod \"dnsmasq-dns-5959f8865f-4s7r2\" (UID: \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\") " pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.511574 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-dns-svc\") pod \"dnsmasq-dns-5959f8865f-4s7r2\" (UID: \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\") " pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.513188 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-4s7r2\" (UID: \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\") " pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.515986 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-scripts\") pod \"keystone-bootstrap-dtf42\" (UID: \"f4664676-6d14-425e-93d1-40c245ef5b06\") " pod="openstack/keystone-bootstrap-dtf42" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.516332 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-bppgm"] Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.517020 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-4s7r2\" (UID: 
\"dce0b886-6f37-41d3-9680-6ed8cc43aa55\") " pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.518220 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-config-data\") pod \"keystone-bootstrap-dtf42\" (UID: \"f4664676-6d14-425e-93d1-40c245ef5b06\") " pod="openstack/keystone-bootstrap-dtf42" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.518300 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-combined-ca-bundle\") pod \"keystone-bootstrap-dtf42\" (UID: \"f4664676-6d14-425e-93d1-40c245ef5b06\") " pod="openstack/keystone-bootstrap-dtf42" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.518336 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-fernet-keys\") pod \"keystone-bootstrap-dtf42\" (UID: \"f4664676-6d14-425e-93d1-40c245ef5b06\") " pod="openstack/keystone-bootstrap-dtf42" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.534505 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-credential-keys\") pod \"keystone-bootstrap-dtf42\" (UID: \"f4664676-6d14-425e-93d1-40c245ef5b06\") " pod="openstack/keystone-bootstrap-dtf42" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.591173 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx5mx\" (UniqueName: \"kubernetes.io/projected/f4664676-6d14-425e-93d1-40c245ef5b06-kube-api-access-sx5mx\") pod \"keystone-bootstrap-dtf42\" (UID: \"f4664676-6d14-425e-93d1-40c245ef5b06\") " pod="openstack/keystone-bootstrap-dtf42" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.603388 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfpc6\" (UniqueName: \"kubernetes.io/projected/dce0b886-6f37-41d3-9680-6ed8cc43aa55-kube-api-access-sfpc6\") pod \"dnsmasq-dns-5959f8865f-4s7r2\" (UID: \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\") " pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.615356 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c5fc14d-062b-4392-97e9-6125fb9b281a-config-data\") pod \"horizon-69c565788f-fjphh\" (UID: \"1c5fc14d-062b-4392-97e9-6125fb9b281a\") " pod="openstack/horizon-69c565788f-fjphh" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.615416 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c5fc14d-062b-4392-97e9-6125fb9b281a-logs\") pod \"horizon-69c565788f-fjphh\" (UID: \"1c5fc14d-062b-4392-97e9-6125fb9b281a\") " pod="openstack/horizon-69c565788f-fjphh" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.615461 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1c5fc14d-062b-4392-97e9-6125fb9b281a-horizon-secret-key\") pod \"horizon-69c565788f-fjphh\" (UID: \"1c5fc14d-062b-4392-97e9-6125fb9b281a\") " pod="openstack/horizon-69c565788f-fjphh" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.615486 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58-config\") pod \"neutron-db-sync-bppgm\" (UID: \"4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58\") " pod="openstack/neutron-db-sync-bppgm" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.615534 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcstn\" (UniqueName: \"kubernetes.io/projected/1c5fc14d-062b-4392-97e9-6125fb9b281a-kube-api-access-jcstn\") pod \"horizon-69c565788f-fjphh\" (UID: \"1c5fc14d-062b-4392-97e9-6125fb9b281a\") " pod="openstack/horizon-69c565788f-fjphh" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.615555 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c5fc14d-062b-4392-97e9-6125fb9b281a-scripts\") pod \"horizon-69c565788f-fjphh\" (UID: \"1c5fc14d-062b-4392-97e9-6125fb9b281a\") " pod="openstack/horizon-69c565788f-fjphh" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.615580 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk4p7\" (UniqueName: \"kubernetes.io/projected/4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58-kube-api-access-rk4p7\") pod \"neutron-db-sync-bppgm\" (UID: \"4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58\") " pod="openstack/neutron-db-sync-bppgm" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.615604 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58-combined-ca-bundle\") pod \"neutron-db-sync-bppgm\" (UID: \"4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58\") " pod="openstack/neutron-db-sync-bppgm" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.616260 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.616690 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-cfbmg"] Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.617658 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-cfbmg" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.618721 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c5fc14d-062b-4392-97e9-6125fb9b281a-scripts\") pod \"horizon-69c565788f-fjphh\" (UID: \"1c5fc14d-062b-4392-97e9-6125fb9b281a\") " pod="openstack/horizon-69c565788f-fjphh" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.618966 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c5fc14d-062b-4392-97e9-6125fb9b281a-logs\") pod \"horizon-69c565788f-fjphh\" (UID: \"1c5fc14d-062b-4392-97e9-6125fb9b281a\") " pod="openstack/horizon-69c565788f-fjphh" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.619310 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c5fc14d-062b-4392-97e9-6125fb9b281a-config-data\") pod \"horizon-69c565788f-fjphh\" (UID: \"1c5fc14d-062b-4392-97e9-6125fb9b281a\") " pod="openstack/horizon-69c565788f-fjphh" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.627055 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.627539 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-j8z9d" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.628254 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.640042 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1c5fc14d-062b-4392-97e9-6125fb9b281a-horizon-secret-key\") pod \"horizon-69c565788f-fjphh\" (UID: \"1c5fc14d-062b-4392-97e9-6125fb9b281a\") " pod="openstack/horizon-69c565788f-fjphh" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.646461 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-cfbmg"] Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.679272 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.685085 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcstn\" (UniqueName: \"kubernetes.io/projected/1c5fc14d-062b-4392-97e9-6125fb9b281a-kube-api-access-jcstn\") pod \"horizon-69c565788f-fjphh\" (UID: \"1c5fc14d-062b-4392-97e9-6125fb9b281a\") " pod="openstack/horizon-69c565788f-fjphh" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.703817 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.710801 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.710980 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.717865 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2de4e428-ba8b-43d2-893d-e6f020997e5b-db-sync-config-data\") pod \"cinder-db-sync-cfbmg\" (UID: \"2de4e428-ba8b-43d2-893d-e6f020997e5b\") " pod="openstack/cinder-db-sync-cfbmg" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.718017 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58-config\") pod \"neutron-db-sync-bppgm\" (UID: \"4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58\") " pod="openstack/neutron-db-sync-bppgm" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.718096 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2de4e428-ba8b-43d2-893d-e6f020997e5b-config-data\") pod \"cinder-db-sync-cfbmg\" (UID: \"2de4e428-ba8b-43d2-893d-e6f020997e5b\") " pod="openstack/cinder-db-sync-cfbmg" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.718201 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk4p7\" (UniqueName: \"kubernetes.io/projected/4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58-kube-api-access-rk4p7\") pod \"neutron-db-sync-bppgm\" (UID: \"4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58\") " pod="openstack/neutron-db-sync-bppgm" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.718272 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58-combined-ca-bundle\") pod \"neutron-db-sync-bppgm\" (UID: \"4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58\") " pod="openstack/neutron-db-sync-bppgm" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.718344 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de4e428-ba8b-43d2-893d-e6f020997e5b-combined-ca-bundle\") pod \"cinder-db-sync-cfbmg\" (UID: \"2de4e428-ba8b-43d2-893d-e6f020997e5b\") " pod="openstack/cinder-db-sync-cfbmg" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.718452 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnclm\" (UniqueName: \"kubernetes.io/projected/2de4e428-ba8b-43d2-893d-e6f020997e5b-kube-api-access-jnclm\") pod \"cinder-db-sync-cfbmg\" (UID: \"2de4e428-ba8b-43d2-893d-e6f020997e5b\") " pod="openstack/cinder-db-sync-cfbmg" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.718538 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2de4e428-ba8b-43d2-893d-e6f020997e5b-etc-machine-id\") pod \"cinder-db-sync-cfbmg\" (UID: \"2de4e428-ba8b-43d2-893d-e6f020997e5b\") " pod="openstack/cinder-db-sync-cfbmg" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.718655 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2de4e428-ba8b-43d2-893d-e6f020997e5b-scripts\") pod \"cinder-db-sync-cfbmg\" (UID: \"2de4e428-ba8b-43d2-893d-e6f020997e5b\") " pod="openstack/cinder-db-sync-cfbmg" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.729989 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58-config\") pod \"neutron-db-sync-bppgm\" (UID: \"4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58\") " pod="openstack/neutron-db-sync-bppgm" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.744077 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58-combined-ca-bundle\") pod \"neutron-db-sync-bppgm\" (UID: \"4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58\") " pod="openstack/neutron-db-sync-bppgm" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.760683 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.762871 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69c565788f-fjphh" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.765159 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk4p7\" (UniqueName: \"kubernetes.io/projected/4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58-kube-api-access-rk4p7\") pod \"neutron-db-sync-bppgm\" (UID: \"4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58\") " pod="openstack/neutron-db-sync-bppgm" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.778905 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bppgm" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.797737 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-2rx99"] Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.799061 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2rx99" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.805950 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.806193 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-4fxdr" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.810213 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-2m7cd"] Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.811211 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2m7cd" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.817263 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.819009 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-ckbwr" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.821544 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de4e428-ba8b-43d2-893d-e6f020997e5b-combined-ca-bundle\") pod \"cinder-db-sync-cfbmg\" (UID: \"2de4e428-ba8b-43d2-893d-e6f020997e5b\") " pod="openstack/cinder-db-sync-cfbmg" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.821623 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b11789-7642-4d03-a060-26842da8ab4b-config-data\") pod \"ceilometer-0\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " pod="openstack/ceilometer-0" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.821655 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9b11789-7642-4d03-a060-26842da8ab4b-log-httpd\") pod \"ceilometer-0\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " pod="openstack/ceilometer-0" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.821684 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnclm\" (UniqueName: \"kubernetes.io/projected/2de4e428-ba8b-43d2-893d-e6f020997e5b-kube-api-access-jnclm\") pod \"cinder-db-sync-cfbmg\" (UID: \"2de4e428-ba8b-43d2-893d-e6f020997e5b\") " pod="openstack/cinder-db-sync-cfbmg" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.821710 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2de4e428-ba8b-43d2-893d-e6f020997e5b-etc-machine-id\") pod \"cinder-db-sync-cfbmg\" (UID: \"2de4e428-ba8b-43d2-893d-e6f020997e5b\") " pod="openstack/cinder-db-sync-cfbmg" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.821748 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl4b4\" (UniqueName: \"kubernetes.io/projected/b9b11789-7642-4d03-a060-26842da8ab4b-kube-api-access-tl4b4\") pod \"ceilometer-0\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " pod="openstack/ceilometer-0" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.821767 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b11789-7642-4d03-a060-26842da8ab4b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " pod="openstack/ceilometer-0" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.821792 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9b11789-7642-4d03-a060-26842da8ab4b-run-httpd\") pod \"ceilometer-0\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " pod="openstack/ceilometer-0" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.821819 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9b11789-7642-4d03-a060-26842da8ab4b-scripts\") pod \"ceilometer-0\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " pod="openstack/ceilometer-0" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.821849 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2de4e428-ba8b-43d2-893d-e6f020997e5b-scripts\") pod \"cinder-db-sync-cfbmg\" (UID: \"2de4e428-ba8b-43d2-893d-e6f020997e5b\") " pod="openstack/cinder-db-sync-cfbmg" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.821888 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2de4e428-ba8b-43d2-893d-e6f020997e5b-db-sync-config-data\") pod \"cinder-db-sync-cfbmg\" (UID: \"2de4e428-ba8b-43d2-893d-e6f020997e5b\") " pod="openstack/cinder-db-sync-cfbmg" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.821934 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9b11789-7642-4d03-a060-26842da8ab4b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " pod="openstack/ceilometer-0" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.821986 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2de4e428-ba8b-43d2-893d-e6f020997e5b-config-data\") pod \"cinder-db-sync-cfbmg\" (UID: \"2de4e428-ba8b-43d2-893d-e6f020997e5b\") " pod="openstack/cinder-db-sync-cfbmg" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.827882 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2de4e428-ba8b-43d2-893d-e6f020997e5b-etc-machine-id\") pod \"cinder-db-sync-cfbmg\" (UID: \"2de4e428-ba8b-43d2-893d-e6f020997e5b\") " pod="openstack/cinder-db-sync-cfbmg" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.832516 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.852175 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-4s7r2"] Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.852298 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2de4e428-ba8b-43d2-893d-e6f020997e5b-db-sync-config-data\") pod \"cinder-db-sync-cfbmg\" (UID: \"2de4e428-ba8b-43d2-893d-e6f020997e5b\") " pod="openstack/cinder-db-sync-cfbmg" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.856007 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-868dfc8f69-dxnq8"] Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.858091 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-868dfc8f69-dxnq8" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.879466 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dtf42" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.888341 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de4e428-ba8b-43d2-893d-e6f020997e5b-combined-ca-bundle\") pod \"cinder-db-sync-cfbmg\" (UID: \"2de4e428-ba8b-43d2-893d-e6f020997e5b\") " pod="openstack/cinder-db-sync-cfbmg" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.889158 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2de4e428-ba8b-43d2-893d-e6f020997e5b-config-data\") pod \"cinder-db-sync-cfbmg\" (UID: \"2de4e428-ba8b-43d2-893d-e6f020997e5b\") " pod="openstack/cinder-db-sync-cfbmg" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.891973 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2de4e428-ba8b-43d2-893d-e6f020997e5b-scripts\") pod \"cinder-db-sync-cfbmg\" (UID: \"2de4e428-ba8b-43d2-893d-e6f020997e5b\") " pod="openstack/cinder-db-sync-cfbmg" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.898931 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnclm\" (UniqueName: \"kubernetes.io/projected/2de4e428-ba8b-43d2-893d-e6f020997e5b-kube-api-access-jnclm\") pod \"cinder-db-sync-cfbmg\" (UID: \"2de4e428-ba8b-43d2-893d-e6f020997e5b\") " pod="openstack/cinder-db-sync-cfbmg" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.906792 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2m7cd"] Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.927822 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl4b4\" (UniqueName: \"kubernetes.io/projected/b9b11789-7642-4d03-a060-26842da8ab4b-kube-api-access-tl4b4\") pod \"ceilometer-0\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " pod="openstack/ceilometer-0" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.927882 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b11789-7642-4d03-a060-26842da8ab4b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " pod="openstack/ceilometer-0" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.927910 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9b11789-7642-4d03-a060-26842da8ab4b-run-httpd\") pod \"ceilometer-0\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " pod="openstack/ceilometer-0" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.927939 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9b11789-7642-4d03-a060-26842da8ab4b-scripts\") pod \"ceilometer-0\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " pod="openstack/ceilometer-0" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.927993 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88793089-cfde-48d5-8670-880344ab6711-logs\") pod \"placement-db-sync-2m7cd\" (UID: \"88793089-cfde-48d5-8670-880344ab6711\") " pod="openstack/placement-db-sync-2m7cd" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.928024 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zzfw\" (UniqueName: \"kubernetes.io/projected/2456d901-f349-4f7a-b9cc-63c9eba428c8-kube-api-access-9zzfw\") pod \"barbican-db-sync-2rx99\" (UID: \"2456d901-f349-4f7a-b9cc-63c9eba428c8\") " pod="openstack/barbican-db-sync-2rx99" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.928053 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9b11789-7642-4d03-a060-26842da8ab4b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " pod="openstack/ceilometer-0" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.928085 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2456d901-f349-4f7a-b9cc-63c9eba428c8-db-sync-config-data\") pod \"barbican-db-sync-2rx99\" (UID: \"2456d901-f349-4f7a-b9cc-63c9eba428c8\") " pod="openstack/barbican-db-sync-2rx99" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.928135 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88793089-cfde-48d5-8670-880344ab6711-config-data\") pod \"placement-db-sync-2m7cd\" (UID: \"88793089-cfde-48d5-8670-880344ab6711\") " pod="openstack/placement-db-sync-2m7cd" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.928159 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2456d901-f349-4f7a-b9cc-63c9eba428c8-combined-ca-bundle\") pod \"barbican-db-sync-2rx99\" (UID: \"2456d901-f349-4f7a-b9cc-63c9eba428c8\") " pod="openstack/barbican-db-sync-2rx99" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.928186 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88793089-cfde-48d5-8670-880344ab6711-combined-ca-bundle\") pod \"placement-db-sync-2m7cd\" (UID: \"88793089-cfde-48d5-8670-880344ab6711\") " pod="openstack/placement-db-sync-2m7cd" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.928207 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfv92\" (UniqueName: \"kubernetes.io/projected/88793089-cfde-48d5-8670-880344ab6711-kube-api-access-wfv92\") pod \"placement-db-sync-2m7cd\" (UID: \"88793089-cfde-48d5-8670-880344ab6711\") " pod="openstack/placement-db-sync-2m7cd" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.928232 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b11789-7642-4d03-a060-26842da8ab4b-config-data\") pod \"ceilometer-0\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " pod="openstack/ceilometer-0" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.928259 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9b11789-7642-4d03-a060-26842da8ab4b-log-httpd\") pod \"ceilometer-0\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " pod="openstack/ceilometer-0" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.928281 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88793089-cfde-48d5-8670-880344ab6711-scripts\") pod \"placement-db-sync-2m7cd\" (UID: \"88793089-cfde-48d5-8670-880344ab6711\") " pod="openstack/placement-db-sync-2m7cd" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.929676 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9b11789-7642-4d03-a060-26842da8ab4b-log-httpd\") pod \"ceilometer-0\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " pod="openstack/ceilometer-0" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.929951 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9b11789-7642-4d03-a060-26842da8ab4b-run-httpd\") pod \"ceilometer-0\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " pod="openstack/ceilometer-0" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.943961 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b11789-7642-4d03-a060-26842da8ab4b-config-data\") pod \"ceilometer-0\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " pod="openstack/ceilometer-0" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.944573 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9b11789-7642-4d03-a060-26842da8ab4b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " pod="openstack/ceilometer-0" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.944891 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b11789-7642-4d03-a060-26842da8ab4b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " pod="openstack/ceilometer-0" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.962293 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9b11789-7642-4d03-a060-26842da8ab4b-scripts\") pod \"ceilometer-0\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " pod="openstack/ceilometer-0" Nov 23 15:01:45 crc kubenswrapper[4718]: I1123 15:01:45.996849 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-vsjz5"] Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.000657 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.001702 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl4b4\" (UniqueName: \"kubernetes.io/projected/b9b11789-7642-4d03-a060-26842da8ab4b-kube-api-access-tl4b4\") pod \"ceilometer-0\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " pod="openstack/ceilometer-0" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.023196 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-868dfc8f69-dxnq8"] Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.040757 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88793089-cfde-48d5-8670-880344ab6711-combined-ca-bundle\") pod \"placement-db-sync-2m7cd\" (UID: \"88793089-cfde-48d5-8670-880344ab6711\") " pod="openstack/placement-db-sync-2m7cd" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.040789 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg9f6\" (UniqueName: \"kubernetes.io/projected/b751d852-bc02-41e5-9937-8590d330b58d-kube-api-access-zg9f6\") pod \"horizon-868dfc8f69-dxnq8\" (UID: \"b751d852-bc02-41e5-9937-8590d330b58d\") " pod="openstack/horizon-868dfc8f69-dxnq8" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.040810 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfv92\" (UniqueName: \"kubernetes.io/projected/88793089-cfde-48d5-8670-880344ab6711-kube-api-access-wfv92\") pod \"placement-db-sync-2m7cd\" (UID: \"88793089-cfde-48d5-8670-880344ab6711\") " pod="openstack/placement-db-sync-2m7cd" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.040864 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b751d852-bc02-41e5-9937-8590d330b58d-horizon-secret-key\") pod \"horizon-868dfc8f69-dxnq8\" (UID: \"b751d852-bc02-41e5-9937-8590d330b58d\") " pod="openstack/horizon-868dfc8f69-dxnq8" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.040886 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88793089-cfde-48d5-8670-880344ab6711-scripts\") pod \"placement-db-sync-2m7cd\" (UID: \"88793089-cfde-48d5-8670-880344ab6711\") " pod="openstack/placement-db-sync-2m7cd" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.040957 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b751d852-bc02-41e5-9937-8590d330b58d-logs\") pod \"horizon-868dfc8f69-dxnq8\" (UID: \"b751d852-bc02-41e5-9937-8590d330b58d\") " pod="openstack/horizon-868dfc8f69-dxnq8" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.041017 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88793089-cfde-48d5-8670-880344ab6711-logs\") pod \"placement-db-sync-2m7cd\" (UID: \"88793089-cfde-48d5-8670-880344ab6711\") " pod="openstack/placement-db-sync-2m7cd" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.041053 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zzfw\" (UniqueName: 
\"kubernetes.io/projected/2456d901-f349-4f7a-b9cc-63c9eba428c8-kube-api-access-9zzfw\") pod \"barbican-db-sync-2rx99\" (UID: \"2456d901-f349-4f7a-b9cc-63c9eba428c8\") " pod="openstack/barbican-db-sync-2rx99" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.041121 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b751d852-bc02-41e5-9937-8590d330b58d-scripts\") pod \"horizon-868dfc8f69-dxnq8\" (UID: \"b751d852-bc02-41e5-9937-8590d330b58d\") " pod="openstack/horizon-868dfc8f69-dxnq8" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.041143 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2456d901-f349-4f7a-b9cc-63c9eba428c8-db-sync-config-data\") pod \"barbican-db-sync-2rx99\" (UID: \"2456d901-f349-4f7a-b9cc-63c9eba428c8\") " pod="openstack/barbican-db-sync-2rx99" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.041171 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b751d852-bc02-41e5-9937-8590d330b58d-config-data\") pod \"horizon-868dfc8f69-dxnq8\" (UID: \"b751d852-bc02-41e5-9937-8590d330b58d\") " pod="openstack/horizon-868dfc8f69-dxnq8" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.041242 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88793089-cfde-48d5-8670-880344ab6711-config-data\") pod \"placement-db-sync-2m7cd\" (UID: \"88793089-cfde-48d5-8670-880344ab6711\") " pod="openstack/placement-db-sync-2m7cd" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.041275 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2456d901-f349-4f7a-b9cc-63c9eba428c8-combined-ca-bundle\") pod \"barbican-db-sync-2rx99\" (UID: \"2456d901-f349-4f7a-b9cc-63c9eba428c8\") " pod="openstack/barbican-db-sync-2rx99" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.044273 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88793089-cfde-48d5-8670-880344ab6711-logs\") pod \"placement-db-sync-2m7cd\" (UID: \"88793089-cfde-48d5-8670-880344ab6711\") " pod="openstack/placement-db-sync-2m7cd" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.048246 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88793089-cfde-48d5-8670-880344ab6711-scripts\") pod \"placement-db-sync-2m7cd\" (UID: \"88793089-cfde-48d5-8670-880344ab6711\") " pod="openstack/placement-db-sync-2m7cd" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.048945 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88793089-cfde-48d5-8670-880344ab6711-combined-ca-bundle\") pod \"placement-db-sync-2m7cd\" (UID: \"88793089-cfde-48d5-8670-880344ab6711\") " pod="openstack/placement-db-sync-2m7cd" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.060232 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2456d901-f349-4f7a-b9cc-63c9eba428c8-db-sync-config-data\") pod \"barbican-db-sync-2rx99\" (UID: \"2456d901-f349-4f7a-b9cc-63c9eba428c8\") " 
pod="openstack/barbican-db-sync-2rx99" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.060643 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88793089-cfde-48d5-8670-880344ab6711-config-data\") pod \"placement-db-sync-2m7cd\" (UID: \"88793089-cfde-48d5-8670-880344ab6711\") " pod="openstack/placement-db-sync-2m7cd" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.064222 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2rx99"] Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.066097 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2456d901-f349-4f7a-b9cc-63c9eba428c8-combined-ca-bundle\") pod \"barbican-db-sync-2rx99\" (UID: \"2456d901-f349-4f7a-b9cc-63c9eba428c8\") " pod="openstack/barbican-db-sync-2rx99" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.066169 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zzfw\" (UniqueName: \"kubernetes.io/projected/2456d901-f349-4f7a-b9cc-63c9eba428c8-kube-api-access-9zzfw\") pod \"barbican-db-sync-2rx99\" (UID: \"2456d901-f349-4f7a-b9cc-63c9eba428c8\") " pod="openstack/barbican-db-sync-2rx99" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.077065 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfv92\" (UniqueName: \"kubernetes.io/projected/88793089-cfde-48d5-8670-880344ab6711-kube-api-access-wfv92\") pod \"placement-db-sync-2m7cd\" (UID: \"88793089-cfde-48d5-8670-880344ab6711\") " pod="openstack/placement-db-sync-2m7cd" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.088237 4718 generic.go:334] "Generic (PLEG): container finished" podID="3a1117c5-9775-4947-a666-60458548504a" containerID="2d4a563811fe7fbc543e0829be5eda7fab31decd2d36a4bdd34b00798609fb89" exitCode=0 Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.088279 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-s65rw" event={"ID":"3a1117c5-9775-4947-a666-60458548504a","Type":"ContainerDied","Data":"2d4a563811fe7fbc543e0829be5eda7fab31decd2d36a4bdd34b00798609fb89"} Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.098222 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-vsjz5"] Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.117917 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-cfbmg" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.142811 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-vsjz5\" (UID: \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.143124 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-config\") pod \"dnsmasq-dns-58dd9ff6bc-vsjz5\" (UID: \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.143147 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b751d852-bc02-41e5-9937-8590d330b58d-logs\") pod \"horizon-868dfc8f69-dxnq8\" (UID: \"b751d852-bc02-41e5-9937-8590d330b58d\") " pod="openstack/horizon-868dfc8f69-dxnq8" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.143170 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-vsjz5\" (UID: \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.143202 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mf8t\" (UniqueName: \"kubernetes.io/projected/99ae4906-9b02-471f-ab42-aa4dc6ba017d-kube-api-access-6mf8t\") pod \"dnsmasq-dns-58dd9ff6bc-vsjz5\" (UID: \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.143239 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-vsjz5\" (UID: \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.143263 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b751d852-bc02-41e5-9937-8590d330b58d-scripts\") pod \"horizon-868dfc8f69-dxnq8\" (UID: \"b751d852-bc02-41e5-9937-8590d330b58d\") " pod="openstack/horizon-868dfc8f69-dxnq8" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.143282 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b751d852-bc02-41e5-9937-8590d330b58d-config-data\") pod \"horizon-868dfc8f69-dxnq8\" (UID: \"b751d852-bc02-41e5-9937-8590d330b58d\") " pod="openstack/horizon-868dfc8f69-dxnq8" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.143313 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-vsjz5\" (UID: 
\"99ae4906-9b02-471f-ab42-aa4dc6ba017d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.143346 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg9f6\" (UniqueName: \"kubernetes.io/projected/b751d852-bc02-41e5-9937-8590d330b58d-kube-api-access-zg9f6\") pod \"horizon-868dfc8f69-dxnq8\" (UID: \"b751d852-bc02-41e5-9937-8590d330b58d\") " pod="openstack/horizon-868dfc8f69-dxnq8" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.143375 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b751d852-bc02-41e5-9937-8590d330b58d-horizon-secret-key\") pod \"horizon-868dfc8f69-dxnq8\" (UID: \"b751d852-bc02-41e5-9937-8590d330b58d\") " pod="openstack/horizon-868dfc8f69-dxnq8" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.144427 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b751d852-bc02-41e5-9937-8590d330b58d-logs\") pod \"horizon-868dfc8f69-dxnq8\" (UID: \"b751d852-bc02-41e5-9937-8590d330b58d\") " pod="openstack/horizon-868dfc8f69-dxnq8" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.144759 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b751d852-bc02-41e5-9937-8590d330b58d-scripts\") pod \"horizon-868dfc8f69-dxnq8\" (UID: \"b751d852-bc02-41e5-9937-8590d330b58d\") " pod="openstack/horizon-868dfc8f69-dxnq8" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.145345 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b751d852-bc02-41e5-9937-8590d330b58d-config-data\") pod \"horizon-868dfc8f69-dxnq8\" (UID: \"b751d852-bc02-41e5-9937-8590d330b58d\") " pod="openstack/horizon-868dfc8f69-dxnq8" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.156619 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b751d852-bc02-41e5-9937-8590d330b58d-horizon-secret-key\") pod \"horizon-868dfc8f69-dxnq8\" (UID: \"b751d852-bc02-41e5-9937-8590d330b58d\") " pod="openstack/horizon-868dfc8f69-dxnq8" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.167343 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.168757 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg9f6\" (UniqueName: \"kubernetes.io/projected/b751d852-bc02-41e5-9937-8590d330b58d-kube-api-access-zg9f6\") pod \"horizon-868dfc8f69-dxnq8\" (UID: \"b751d852-bc02-41e5-9937-8590d330b58d\") " pod="openstack/horizon-868dfc8f69-dxnq8" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.199807 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2rx99" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.227816 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2m7cd" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.244590 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-vsjz5\" (UID: \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.244661 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-vsjz5\" (UID: \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.244740 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-vsjz5\" (UID: \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.244757 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-config\") pod \"dnsmasq-dns-58dd9ff6bc-vsjz5\" (UID: \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.244788 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-vsjz5\" (UID: \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.244815 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mf8t\" (UniqueName: \"kubernetes.io/projected/99ae4906-9b02-471f-ab42-aa4dc6ba017d-kube-api-access-6mf8t\") pod \"dnsmasq-dns-58dd9ff6bc-vsjz5\" (UID: \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.245869 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-vsjz5\" (UID: \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.245923 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-config\") pod \"dnsmasq-dns-58dd9ff6bc-vsjz5\" (UID: \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.246078 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-868dfc8f69-dxnq8" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.246791 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-vsjz5\" (UID: \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.247268 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-vsjz5\" (UID: \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.247559 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-vsjz5\" (UID: \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.274420 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mf8t\" (UniqueName: \"kubernetes.io/projected/99ae4906-9b02-471f-ab42-aa4dc6ba017d-kube-api-access-6mf8t\") pod \"dnsmasq-dns-58dd9ff6bc-vsjz5\" (UID: \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.356965 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.474366 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-4s7r2"] Nov 23 15:01:46 crc kubenswrapper[4718]: W1123 15:01:46.518606 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddce0b886_6f37_41d3_9680_6ed8cc43aa55.slice/crio-4b91ee0849fc40c8e251aa6cf6e95506a19ce39b151193810d27bd4d8431a2c8 WatchSource:0}: Error finding container 4b91ee0849fc40c8e251aa6cf6e95506a19ce39b151193810d27bd4d8431a2c8: Status 404 returned error can't find the container with id 4b91ee0849fc40c8e251aa6cf6e95506a19ce39b151193810d27bd4d8431a2c8 Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.518648 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69c565788f-fjphh"] Nov 23 15:01:46 crc kubenswrapper[4718]: W1123 15:01:46.687477 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a3b7bc7_1950_41d8_998b_b8ca6a7f6e58.slice/crio-ff5448ff258c01cdc17bf5c7928d4cf486a52232632fc4a0a0acf50e781b745e WatchSource:0}: Error finding container ff5448ff258c01cdc17bf5c7928d4cf486a52232632fc4a0a0acf50e781b745e: Status 404 returned error can't find the container with id ff5448ff258c01cdc17bf5c7928d4cf486a52232632fc4a0a0acf50e781b745e Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.690848 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-bppgm"] Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.694915 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dtf42"] Nov 23 15:01:46 crc kubenswrapper[4718]: W1123 
15:01:46.701254 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4664676_6d14_425e_93d1_40c245ef5b06.slice/crio-87233dbde4f2ffa017edd89c0714b74b569d69761486618b06209e1945ed9d30 WatchSource:0}: Error finding container 87233dbde4f2ffa017edd89c0714b74b569d69761486618b06209e1945ed9d30: Status 404 returned error can't find the container with id 87233dbde4f2ffa017edd89c0714b74b569d69761486618b06209e1945ed9d30 Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.701405 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-cfbmg"] Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.866860 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.941818 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2rx99"] Nov 23 15:01:46 crc kubenswrapper[4718]: W1123 15:01:46.950320 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2456d901_f349_4f7a_b9cc_63c9eba428c8.slice/crio-e2dbd12bc6adc1b71ea1cf24735d3ebdae298726837ec54d6e3c199dc37b24bd WatchSource:0}: Error finding container e2dbd12bc6adc1b71ea1cf24735d3ebdae298726837ec54d6e3c199dc37b24bd: Status 404 returned error can't find the container with id e2dbd12bc6adc1b71ea1cf24735d3ebdae298726837ec54d6e3c199dc37b24bd Nov 23 15:01:46 crc kubenswrapper[4718]: I1123 15:01:46.950653 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-868dfc8f69-dxnq8"] Nov 23 15:01:46 crc kubenswrapper[4718]: W1123 15:01:46.953679 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb751d852_bc02_41e5_9937_8590d330b58d.slice/crio-37a2392fae1780b0d914fd2cad1e9824ae619d9afba3223ee8fc97acea27cd41 WatchSource:0}: Error finding container 37a2392fae1780b0d914fd2cad1e9824ae619d9afba3223ee8fc97acea27cd41: Status 404 returned error can't find the container with id 37a2392fae1780b0d914fd2cad1e9824ae619d9afba3223ee8fc97acea27cd41 Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.043201 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2m7cd"] Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.048465 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-vsjz5"] Nov 23 15:01:47 crc kubenswrapper[4718]: W1123 15:01:47.052980 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88793089_cfde_48d5_8670_880344ab6711.slice/crio-49a7d8fe5faa0e56e78f681f1ef5d4030db2e3fefc3e8a722828326af76c9a19 WatchSource:0}: Error finding container 49a7d8fe5faa0e56e78f681f1ef5d4030db2e3fefc3e8a722828326af76c9a19: Status 404 returned error can't find the container with id 49a7d8fe5faa0e56e78f681f1ef5d4030db2e3fefc3e8a722828326af76c9a19 Nov 23 15:01:47 crc kubenswrapper[4718]: W1123 15:01:47.055610 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99ae4906_9b02_471f_ab42_aa4dc6ba017d.slice/crio-afd503628a30ba76d4305c05c6942d00cf65d47eb14e7d66f0cbc1e81f046920 WatchSource:0}: Error finding container afd503628a30ba76d4305c05c6942d00cf65d47eb14e7d66f0cbc1e81f046920: Status 404 returned error can't find the container with id 
afd503628a30ba76d4305c05c6942d00cf65d47eb14e7d66f0cbc1e81f046920
Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.108376 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bppgm" event={"ID":"4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58","Type":"ContainerStarted","Data":"ff5448ff258c01cdc17bf5c7928d4cf486a52232632fc4a0a0acf50e781b745e"}
Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.110422 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c565788f-fjphh" event={"ID":"1c5fc14d-062b-4392-97e9-6125fb9b281a","Type":"ContainerStarted","Data":"0af420eab7b9f2c90c8e111b6abf921ebc1a035988a11e3ea63b396e63f8de77"}
Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.111556 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-868dfc8f69-dxnq8" event={"ID":"b751d852-bc02-41e5-9937-8590d330b58d","Type":"ContainerStarted","Data":"37a2392fae1780b0d914fd2cad1e9824ae619d9afba3223ee8fc97acea27cd41"}
Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.112611 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2m7cd" event={"ID":"88793089-cfde-48d5-8670-880344ab6711","Type":"ContainerStarted","Data":"49a7d8fe5faa0e56e78f681f1ef5d4030db2e3fefc3e8a722828326af76c9a19"}
Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.113685 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2rx99" event={"ID":"2456d901-f349-4f7a-b9cc-63c9eba428c8","Type":"ContainerStarted","Data":"e2dbd12bc6adc1b71ea1cf24735d3ebdae298726837ec54d6e3c199dc37b24bd"}
Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.114639 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" event={"ID":"dce0b886-6f37-41d3-9680-6ed8cc43aa55","Type":"ContainerStarted","Data":"4b91ee0849fc40c8e251aa6cf6e95506a19ce39b151193810d27bd4d8431a2c8"}
Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.115456 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9b11789-7642-4d03-a060-26842da8ab4b","Type":"ContainerStarted","Data":"89b5a2823cc325082a8b7513c0810c7970d300df77dd8a8060b55c7652dac76b"}
Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.116178 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-cfbmg" event={"ID":"2de4e428-ba8b-43d2-893d-e6f020997e5b","Type":"ContainerStarted","Data":"4214748061dedd11c505dc9751733a8d14d47fc28c98da134f548b0456972e12"}
Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.116985 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" event={"ID":"99ae4906-9b02-471f-ab42-aa4dc6ba017d","Type":"ContainerStarted","Data":"afd503628a30ba76d4305c05c6942d00cf65d47eb14e7d66f0cbc1e81f046920"}
Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.117820 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dtf42" event={"ID":"f4664676-6d14-425e-93d1-40c245ef5b06","Type":"ContainerStarted","Data":"87233dbde4f2ffa017edd89c0714b74b569d69761486618b06209e1945ed9d30"}
Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.322339 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.334497 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69c565788f-fjphh"]
Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.366562 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5447c9669f-r2wq4"]
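[annotation] The kubelet.go:2453 "SyncLoop (PLEG)" entries above come from the Pod Lifecycle Event Generator, which compares successive container-state snapshots from CRI-O and turns each change into a ContainerStarted or ContainerDied event; the W-level manager.go:1169 "Status 404" warnings interleaved with them appear to be the usual transient race where cadvisor sees a new crio-* cgroup before the container is registered. A simplified relist sketch in Go follows; the types are hypothetical, not kubelet source.

package main

import "fmt"

// state maps containerID -> status ("running" or "exited"); hypothetical model.
type state map[string]string

// relist emits a lifecycle event for every container whose status changed
// between two snapshots, the way the PLEG entries above are produced.
func relist(pod string, old, cur state) {
	events := map[string]string{"running": "ContainerStarted", "exited": "ContainerDied"}
	for id, s := range cur {
		if old[id] != s {
			fmt.Printf("SyncLoop (PLEG): event for pod %q: %s %s\n", pod, events[s], id)
		}
	}
}

func main() {
	// Container ID taken from the dnsmasq-dns-58dd9ff6bc-vsjz5 entry above.
	relist("openstack/dnsmasq-dns-58dd9ff6bc-vsjz5",
		state{},
		state{"afd503628a30ba76d4305c05c6942d00cf65d47eb14e7d66f0cbc1e81f046920": "running"})
}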
"SyncLoop ADD" source="api" pods=["openstack/horizon-5447c9669f-r2wq4"] Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.368075 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5447c9669f-r2wq4" Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.378469 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5447c9669f-r2wq4"] Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.497911 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffcd0872-12a2-4dc9-bfde-22681ed5212f-config-data\") pod \"horizon-5447c9669f-r2wq4\" (UID: \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\") " pod="openstack/horizon-5447c9669f-r2wq4" Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.497950 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f96c\" (UniqueName: \"kubernetes.io/projected/ffcd0872-12a2-4dc9-bfde-22681ed5212f-kube-api-access-7f96c\") pod \"horizon-5447c9669f-r2wq4\" (UID: \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\") " pod="openstack/horizon-5447c9669f-r2wq4" Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.498014 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ffcd0872-12a2-4dc9-bfde-22681ed5212f-horizon-secret-key\") pod \"horizon-5447c9669f-r2wq4\" (UID: \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\") " pod="openstack/horizon-5447c9669f-r2wq4" Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.498126 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffcd0872-12a2-4dc9-bfde-22681ed5212f-logs\") pod \"horizon-5447c9669f-r2wq4\" (UID: \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\") " pod="openstack/horizon-5447c9669f-r2wq4" Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.498231 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffcd0872-12a2-4dc9-bfde-22681ed5212f-scripts\") pod \"horizon-5447c9669f-r2wq4\" (UID: \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\") " pod="openstack/horizon-5447c9669f-r2wq4" Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.600049 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ffcd0872-12a2-4dc9-bfde-22681ed5212f-horizon-secret-key\") pod \"horizon-5447c9669f-r2wq4\" (UID: \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\") " pod="openstack/horizon-5447c9669f-r2wq4" Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.600405 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffcd0872-12a2-4dc9-bfde-22681ed5212f-logs\") pod \"horizon-5447c9669f-r2wq4\" (UID: \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\") " pod="openstack/horizon-5447c9669f-r2wq4" Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.600432 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffcd0872-12a2-4dc9-bfde-22681ed5212f-scripts\") pod \"horizon-5447c9669f-r2wq4\" (UID: \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\") " pod="openstack/horizon-5447c9669f-r2wq4" Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 
15:01:47.600698 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffcd0872-12a2-4dc9-bfde-22681ed5212f-config-data\") pod \"horizon-5447c9669f-r2wq4\" (UID: \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\") " pod="openstack/horizon-5447c9669f-r2wq4" Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.600952 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffcd0872-12a2-4dc9-bfde-22681ed5212f-logs\") pod \"horizon-5447c9669f-r2wq4\" (UID: \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\") " pod="openstack/horizon-5447c9669f-r2wq4" Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.601326 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffcd0872-12a2-4dc9-bfde-22681ed5212f-scripts\") pod \"horizon-5447c9669f-r2wq4\" (UID: \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\") " pod="openstack/horizon-5447c9669f-r2wq4" Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.602363 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffcd0872-12a2-4dc9-bfde-22681ed5212f-config-data\") pod \"horizon-5447c9669f-r2wq4\" (UID: \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\") " pod="openstack/horizon-5447c9669f-r2wq4" Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.603932 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f96c\" (UniqueName: \"kubernetes.io/projected/ffcd0872-12a2-4dc9-bfde-22681ed5212f-kube-api-access-7f96c\") pod \"horizon-5447c9669f-r2wq4\" (UID: \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\") " pod="openstack/horizon-5447c9669f-r2wq4" Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.606306 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ffcd0872-12a2-4dc9-bfde-22681ed5212f-horizon-secret-key\") pod \"horizon-5447c9669f-r2wq4\" (UID: \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\") " pod="openstack/horizon-5447c9669f-r2wq4" Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.626806 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f96c\" (UniqueName: \"kubernetes.io/projected/ffcd0872-12a2-4dc9-bfde-22681ed5212f-kube-api-access-7f96c\") pod \"horizon-5447c9669f-r2wq4\" (UID: \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\") " pod="openstack/horizon-5447c9669f-r2wq4" Nov 23 15:01:47 crc kubenswrapper[4718]: I1123 15:01:47.715377 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5447c9669f-r2wq4" Nov 23 15:01:48 crc kubenswrapper[4718]: I1123 15:01:48.165020 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5447c9669f-r2wq4"] Nov 23 15:01:48 crc kubenswrapper[4718]: W1123 15:01:48.165994 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffcd0872_12a2_4dc9_bfde_22681ed5212f.slice/crio-dba514b4a6e404165512d7c2a6d221a4f607a3c8e18bc9d98a939bde12391cb4 WatchSource:0}: Error finding container dba514b4a6e404165512d7c2a6d221a4f607a3c8e18bc9d98a939bde12391cb4: Status 404 returned error can't find the container with id dba514b4a6e404165512d7c2a6d221a4f607a3c8e18bc9d98a939bde12391cb4 Nov 23 15:01:49 crc kubenswrapper[4718]: I1123 15:01:49.139928 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5447c9669f-r2wq4" event={"ID":"ffcd0872-12a2-4dc9-bfde-22681ed5212f","Type":"ContainerStarted","Data":"dba514b4a6e404165512d7c2a6d221a4f607a3c8e18bc9d98a939bde12391cb4"} Nov 23 15:01:49 crc kubenswrapper[4718]: I1123 15:01:49.516837 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-s65rw" podUID="3a1117c5-9775-4947-a666-60458548504a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused" Nov 23 15:01:49 crc kubenswrapper[4718]: I1123 15:01:49.783464 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-s65rw" Nov 23 15:01:49 crc kubenswrapper[4718]: I1123 15:01:49.843970 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-ovsdbserver-sb\") pod \"3a1117c5-9775-4947-a666-60458548504a\" (UID: \"3a1117c5-9775-4947-a666-60458548504a\") " Nov 23 15:01:49 crc kubenswrapper[4718]: I1123 15:01:49.844146 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-dns-svc\") pod \"3a1117c5-9775-4947-a666-60458548504a\" (UID: \"3a1117c5-9775-4947-a666-60458548504a\") " Nov 23 15:01:49 crc kubenswrapper[4718]: I1123 15:01:49.928216 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3a1117c5-9775-4947-a666-60458548504a" (UID: "3a1117c5-9775-4947-a666-60458548504a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:49 crc kubenswrapper[4718]: I1123 15:01:49.943324 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3a1117c5-9775-4947-a666-60458548504a" (UID: "3a1117c5-9775-4947-a666-60458548504a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:49 crc kubenswrapper[4718]: I1123 15:01:49.945162 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-config\") pod \"3a1117c5-9775-4947-a666-60458548504a\" (UID: \"3a1117c5-9775-4947-a666-60458548504a\") " Nov 23 15:01:49 crc kubenswrapper[4718]: I1123 15:01:49.945245 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-dns-swift-storage-0\") pod \"3a1117c5-9775-4947-a666-60458548504a\" (UID: \"3a1117c5-9775-4947-a666-60458548504a\") " Nov 23 15:01:49 crc kubenswrapper[4718]: I1123 15:01:49.945272 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4l6h\" (UniqueName: \"kubernetes.io/projected/3a1117c5-9775-4947-a666-60458548504a-kube-api-access-h4l6h\") pod \"3a1117c5-9775-4947-a666-60458548504a\" (UID: \"3a1117c5-9775-4947-a666-60458548504a\") " Nov 23 15:01:49 crc kubenswrapper[4718]: I1123 15:01:49.945299 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-ovsdbserver-nb\") pod \"3a1117c5-9775-4947-a666-60458548504a\" (UID: \"3a1117c5-9775-4947-a666-60458548504a\") " Nov 23 15:01:49 crc kubenswrapper[4718]: I1123 15:01:49.945554 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:49 crc kubenswrapper[4718]: I1123 15:01:49.945577 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:49 crc kubenswrapper[4718]: I1123 15:01:49.949611 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a1117c5-9775-4947-a666-60458548504a-kube-api-access-h4l6h" (OuterVolumeSpecName: "kube-api-access-h4l6h") pod "3a1117c5-9775-4947-a666-60458548504a" (UID: "3a1117c5-9775-4947-a666-60458548504a"). InnerVolumeSpecName "kube-api-access-h4l6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:01:49 crc kubenswrapper[4718]: I1123 15:01:49.999375 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3a1117c5-9775-4947-a666-60458548504a" (UID: "3a1117c5-9775-4947-a666-60458548504a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.003752 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-config" (OuterVolumeSpecName: "config") pod "3a1117c5-9775-4947-a666-60458548504a" (UID: "3a1117c5-9775-4947-a666-60458548504a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.026675 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3a1117c5-9775-4947-a666-60458548504a" (UID: "3a1117c5-9775-4947-a666-60458548504a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.047570 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.047603 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-config\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.047616 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a1117c5-9775-4947-a666-60458548504a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.047624 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4l6h\" (UniqueName: \"kubernetes.io/projected/3a1117c5-9775-4947-a666-60458548504a-kube-api-access-h4l6h\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.171027 4718 generic.go:334] "Generic (PLEG): container finished" podID="99ae4906-9b02-471f-ab42-aa4dc6ba017d" containerID="114ebe8142a12147bb58d0ef0988ab3edaf86bbe0f88b002b1ee8ac6874a88b0" exitCode=0 Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.171086 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" event={"ID":"99ae4906-9b02-471f-ab42-aa4dc6ba017d","Type":"ContainerDied","Data":"114ebe8142a12147bb58d0ef0988ab3edaf86bbe0f88b002b1ee8ac6874a88b0"} Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.175316 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dtf42" event={"ID":"f4664676-6d14-425e-93d1-40c245ef5b06","Type":"ContainerStarted","Data":"2820e29c4f114c25ae7affc037dfd61dbe6ddc97552d4efe172c435a40f49260"} Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.180236 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-s65rw" event={"ID":"3a1117c5-9775-4947-a666-60458548504a","Type":"ContainerDied","Data":"f1bbd42763d5dbde124f77430f0887c793fd90204a0eada3dae6e88e8f7ae43f"} Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.180405 4718 scope.go:117] "RemoveContainer" containerID="2d4a563811fe7fbc543e0829be5eda7fab31decd2d36a4bdd34b00798609fb89" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.180621 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-s65rw" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.185847 4718 generic.go:334] "Generic (PLEG): container finished" podID="dce0b886-6f37-41d3-9680-6ed8cc43aa55" containerID="bff6a68c17b0e6e15ff5b06c64592d8facc299d59adefcfc70727983bc87804b" exitCode=0 Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.186770 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" event={"ID":"dce0b886-6f37-41d3-9680-6ed8cc43aa55","Type":"ContainerDied","Data":"bff6a68c17b0e6e15ff5b06c64592d8facc299d59adefcfc70727983bc87804b"} Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.223375 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bppgm" event={"ID":"4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58","Type":"ContainerStarted","Data":"ecc8668f1c44c21f2579dc3d2c0c6c98e4b067637d0542bebce13521bd1fe8e1"} Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.239852 4718 scope.go:117] "RemoveContainer" containerID="e49fba38825102aec57ba0c1a961d0e52c4af31328f1d3dc18db6eea2dfd4906" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.255295 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-dtf42" podStartSLOduration=5.255257349 podStartE2EDuration="5.255257349s" podCreationTimestamp="2025-11-23 15:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:01:50.252159135 +0000 UTC m=+961.491778989" watchObservedRunningTime="2025-11-23 15:01:50.255257349 +0000 UTC m=+961.494877193" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.281044 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-bppgm" podStartSLOduration=5.28102728 podStartE2EDuration="5.28102728s" podCreationTimestamp="2025-11-23 15:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:01:50.277507634 +0000 UTC m=+961.517127478" watchObservedRunningTime="2025-11-23 15:01:50.28102728 +0000 UTC m=+961.520647114" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.297289 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-s65rw"] Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.308198 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-s65rw"] Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.491766 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a1117c5-9775-4947-a666-60458548504a" path="/var/lib/kubelet/pods/3a1117c5-9775-4947-a666-60458548504a/volumes" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.549605 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.565333 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-dns-svc\") pod \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\" (UID: \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\") " Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.565433 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-dns-swift-storage-0\") pod \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\" (UID: \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\") " Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.565544 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfpc6\" (UniqueName: \"kubernetes.io/projected/dce0b886-6f37-41d3-9680-6ed8cc43aa55-kube-api-access-sfpc6\") pod \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\" (UID: \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\") " Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.565602 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-ovsdbserver-nb\") pod \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\" (UID: \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\") " Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.565658 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-config\") pod \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\" (UID: \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\") " Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.565765 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-ovsdbserver-sb\") pod \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\" (UID: \"dce0b886-6f37-41d3-9680-6ed8cc43aa55\") " Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.585349 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dce0b886-6f37-41d3-9680-6ed8cc43aa55-kube-api-access-sfpc6" (OuterVolumeSpecName: "kube-api-access-sfpc6") pod "dce0b886-6f37-41d3-9680-6ed8cc43aa55" (UID: "dce0b886-6f37-41d3-9680-6ed8cc43aa55"). InnerVolumeSpecName "kube-api-access-sfpc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.592909 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dce0b886-6f37-41d3-9680-6ed8cc43aa55" (UID: "dce0b886-6f37-41d3-9680-6ed8cc43aa55"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.594830 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-config" (OuterVolumeSpecName: "config") pod "dce0b886-6f37-41d3-9680-6ed8cc43aa55" (UID: "dce0b886-6f37-41d3-9680-6ed8cc43aa55"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.601256 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dce0b886-6f37-41d3-9680-6ed8cc43aa55" (UID: "dce0b886-6f37-41d3-9680-6ed8cc43aa55"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.615115 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dce0b886-6f37-41d3-9680-6ed8cc43aa55" (UID: "dce0b886-6f37-41d3-9680-6ed8cc43aa55"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.624394 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dce0b886-6f37-41d3-9680-6ed8cc43aa55" (UID: "dce0b886-6f37-41d3-9680-6ed8cc43aa55"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.668826 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.668866 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.668877 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfpc6\" (UniqueName: \"kubernetes.io/projected/dce0b886-6f37-41d3-9680-6ed8cc43aa55-kube-api-access-sfpc6\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.668886 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.668933 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-config\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:50 crc kubenswrapper[4718]: I1123 15:01:50.668943 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dce0b886-6f37-41d3-9680-6ed8cc43aa55-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:51 crc kubenswrapper[4718]: I1123 15:01:51.264186 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" event={"ID":"dce0b886-6f37-41d3-9680-6ed8cc43aa55","Type":"ContainerDied","Data":"4b91ee0849fc40c8e251aa6cf6e95506a19ce39b151193810d27bd4d8431a2c8"} Nov 23 15:01:51 crc kubenswrapper[4718]: I1123 15:01:51.264550 4718 scope.go:117] "RemoveContainer" containerID="bff6a68c17b0e6e15ff5b06c64592d8facc299d59adefcfc70727983bc87804b" Nov 23 15:01:51 crc kubenswrapper[4718]: I1123 15:01:51.264214 
4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-4s7r2" Nov 23 15:01:51 crc kubenswrapper[4718]: I1123 15:01:51.270107 4718 generic.go:334] "Generic (PLEG): container finished" podID="1935d5b8-9171-4708-9b50-4dbef79d106b" containerID="3d239e018f6426fde1107bd93f66bb487c96cb841c0024b1e617c5728cf62874" exitCode=0 Nov 23 15:01:51 crc kubenswrapper[4718]: I1123 15:01:51.270171 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-sz47m" event={"ID":"1935d5b8-9171-4708-9b50-4dbef79d106b","Type":"ContainerDied","Data":"3d239e018f6426fde1107bd93f66bb487c96cb841c0024b1e617c5728cf62874"} Nov 23 15:01:51 crc kubenswrapper[4718]: I1123 15:01:51.318420 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" event={"ID":"99ae4906-9b02-471f-ab42-aa4dc6ba017d","Type":"ContainerStarted","Data":"2627492bac546c54242b430a1591f037988c63cfad20c9d768248ff487cdd295"} Nov 23 15:01:51 crc kubenswrapper[4718]: I1123 15:01:51.318957 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:01:51 crc kubenswrapper[4718]: I1123 15:01:51.345133 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" podStartSLOduration=6.345112154 podStartE2EDuration="6.345112154s" podCreationTimestamp="2025-11-23 15:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:01:51.341579267 +0000 UTC m=+962.581199111" watchObservedRunningTime="2025-11-23 15:01:51.345112154 +0000 UTC m=+962.584731998" Nov 23 15:01:51 crc kubenswrapper[4718]: I1123 15:01:51.408987 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-4s7r2"] Nov 23 15:01:51 crc kubenswrapper[4718]: I1123 15:01:51.418636 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-4s7r2"] Nov 23 15:01:52 crc kubenswrapper[4718]: I1123 15:01:52.455230 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dce0b886-6f37-41d3-9680-6ed8cc43aa55" path="/var/lib/kubelet/pods/dce0b886-6f37-41d3-9680-6ed8cc43aa55/volumes" Nov 23 15:01:53 crc kubenswrapper[4718]: I1123 15:01:53.977671 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-868dfc8f69-dxnq8"] Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.013251 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5696759568-pxlzs"] Nov 23 15:01:54 crc kubenswrapper[4718]: E1123 15:01:54.013758 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce0b886-6f37-41d3-9680-6ed8cc43aa55" containerName="init" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.013784 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce0b886-6f37-41d3-9680-6ed8cc43aa55" containerName="init" Nov 23 15:01:54 crc kubenswrapper[4718]: E1123 15:01:54.013833 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a1117c5-9775-4947-a666-60458548504a" containerName="dnsmasq-dns" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.013843 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a1117c5-9775-4947-a666-60458548504a" containerName="dnsmasq-dns" Nov 23 15:01:54 crc kubenswrapper[4718]: E1123 15:01:54.013855 4718 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3a1117c5-9775-4947-a666-60458548504a" containerName="init" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.013863 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a1117c5-9775-4947-a666-60458548504a" containerName="init" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.014155 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a1117c5-9775-4947-a666-60458548504a" containerName="dnsmasq-dns" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.014181 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce0b886-6f37-41d3-9680-6ed8cc43aa55" containerName="init" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.015357 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.019953 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.036365 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5696759568-pxlzs"] Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.041496 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dba5adf-299f-404c-91f9-c5848e9babe4-combined-ca-bundle\") pod \"horizon-5696759568-pxlzs\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.041561 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78kbg\" (UniqueName: \"kubernetes.io/projected/5dba5adf-299f-404c-91f9-c5848e9babe4-kube-api-access-78kbg\") pod \"horizon-5696759568-pxlzs\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.041596 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5dba5adf-299f-404c-91f9-c5848e9babe4-scripts\") pod \"horizon-5696759568-pxlzs\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.041658 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5dba5adf-299f-404c-91f9-c5848e9babe4-config-data\") pod \"horizon-5696759568-pxlzs\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.041677 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5dba5adf-299f-404c-91f9-c5848e9babe4-horizon-secret-key\") pod \"horizon-5696759568-pxlzs\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.041722 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dba5adf-299f-404c-91f9-c5848e9babe4-logs\") pod \"horizon-5696759568-pxlzs\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:01:54 crc 
kubenswrapper[4718]: I1123 15:01:54.041736 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dba5adf-299f-404c-91f9-c5848e9babe4-horizon-tls-certs\") pod \"horizon-5696759568-pxlzs\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.074148 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5447c9669f-r2wq4"] Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.099218 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-65c9478d8d-nxsfl"] Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.101557 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.126616 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65c9478d8d-nxsfl"] Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.145346 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5dba5adf-299f-404c-91f9-c5848e9babe4-config-data\") pod \"horizon-5696759568-pxlzs\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.145401 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5dba5adf-299f-404c-91f9-c5848e9babe4-horizon-secret-key\") pod \"horizon-5696759568-pxlzs\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.145429 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ebb73efe-fe18-4507-b723-d3dbf1d8ed91-config-data\") pod \"horizon-65c9478d8d-nxsfl\" (UID: \"ebb73efe-fe18-4507-b723-d3dbf1d8ed91\") " pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.145488 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dk6d\" (UniqueName: \"kubernetes.io/projected/ebb73efe-fe18-4507-b723-d3dbf1d8ed91-kube-api-access-8dk6d\") pod \"horizon-65c9478d8d-nxsfl\" (UID: \"ebb73efe-fe18-4507-b723-d3dbf1d8ed91\") " pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.145508 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebb73efe-fe18-4507-b723-d3dbf1d8ed91-logs\") pod \"horizon-65c9478d8d-nxsfl\" (UID: \"ebb73efe-fe18-4507-b723-d3dbf1d8ed91\") " pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.145527 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb73efe-fe18-4507-b723-d3dbf1d8ed91-combined-ca-bundle\") pod \"horizon-65c9478d8d-nxsfl\" (UID: \"ebb73efe-fe18-4507-b723-d3dbf1d8ed91\") " pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.145548 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/5dba5adf-299f-404c-91f9-c5848e9babe4-logs\") pod \"horizon-5696759568-pxlzs\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.145565 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dba5adf-299f-404c-91f9-c5848e9babe4-horizon-tls-certs\") pod \"horizon-5696759568-pxlzs\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.145586 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ebb73efe-fe18-4507-b723-d3dbf1d8ed91-horizon-secret-key\") pod \"horizon-65c9478d8d-nxsfl\" (UID: \"ebb73efe-fe18-4507-b723-d3dbf1d8ed91\") " pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.145639 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dba5adf-299f-404c-91f9-c5848e9babe4-combined-ca-bundle\") pod \"horizon-5696759568-pxlzs\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.145655 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebb73efe-fe18-4507-b723-d3dbf1d8ed91-scripts\") pod \"horizon-65c9478d8d-nxsfl\" (UID: \"ebb73efe-fe18-4507-b723-d3dbf1d8ed91\") " pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.145672 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78kbg\" (UniqueName: \"kubernetes.io/projected/5dba5adf-299f-404c-91f9-c5848e9babe4-kube-api-access-78kbg\") pod \"horizon-5696759568-pxlzs\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.145693 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5dba5adf-299f-404c-91f9-c5848e9babe4-scripts\") pod \"horizon-5696759568-pxlzs\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.145715 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebb73efe-fe18-4507-b723-d3dbf1d8ed91-horizon-tls-certs\") pod \"horizon-65c9478d8d-nxsfl\" (UID: \"ebb73efe-fe18-4507-b723-d3dbf1d8ed91\") " pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.147419 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5dba5adf-299f-404c-91f9-c5848e9babe4-scripts\") pod \"horizon-5696759568-pxlzs\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.147711 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5dba5adf-299f-404c-91f9-c5848e9babe4-config-data\") pod 
\"horizon-5696759568-pxlzs\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.149746 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dba5adf-299f-404c-91f9-c5848e9babe4-logs\") pod \"horizon-5696759568-pxlzs\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.161416 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5dba5adf-299f-404c-91f9-c5848e9babe4-horizon-secret-key\") pod \"horizon-5696759568-pxlzs\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.170079 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dba5adf-299f-404c-91f9-c5848e9babe4-combined-ca-bundle\") pod \"horizon-5696759568-pxlzs\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.175099 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78kbg\" (UniqueName: \"kubernetes.io/projected/5dba5adf-299f-404c-91f9-c5848e9babe4-kube-api-access-78kbg\") pod \"horizon-5696759568-pxlzs\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.176740 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dba5adf-299f-404c-91f9-c5848e9babe4-horizon-tls-certs\") pod \"horizon-5696759568-pxlzs\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.250127 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dk6d\" (UniqueName: \"kubernetes.io/projected/ebb73efe-fe18-4507-b723-d3dbf1d8ed91-kube-api-access-8dk6d\") pod \"horizon-65c9478d8d-nxsfl\" (UID: \"ebb73efe-fe18-4507-b723-d3dbf1d8ed91\") " pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.250221 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebb73efe-fe18-4507-b723-d3dbf1d8ed91-logs\") pod \"horizon-65c9478d8d-nxsfl\" (UID: \"ebb73efe-fe18-4507-b723-d3dbf1d8ed91\") " pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.250566 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb73efe-fe18-4507-b723-d3dbf1d8ed91-combined-ca-bundle\") pod \"horizon-65c9478d8d-nxsfl\" (UID: \"ebb73efe-fe18-4507-b723-d3dbf1d8ed91\") " pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.250695 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ebb73efe-fe18-4507-b723-d3dbf1d8ed91-horizon-secret-key\") pod \"horizon-65c9478d8d-nxsfl\" (UID: \"ebb73efe-fe18-4507-b723-d3dbf1d8ed91\") " pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:01:54 crc 
kubenswrapper[4718]: I1123 15:01:54.250935 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebb73efe-fe18-4507-b723-d3dbf1d8ed91-scripts\") pod \"horizon-65c9478d8d-nxsfl\" (UID: \"ebb73efe-fe18-4507-b723-d3dbf1d8ed91\") " pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.251256 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebb73efe-fe18-4507-b723-d3dbf1d8ed91-logs\") pod \"horizon-65c9478d8d-nxsfl\" (UID: \"ebb73efe-fe18-4507-b723-d3dbf1d8ed91\") " pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.251493 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebb73efe-fe18-4507-b723-d3dbf1d8ed91-horizon-tls-certs\") pod \"horizon-65c9478d8d-nxsfl\" (UID: \"ebb73efe-fe18-4507-b723-d3dbf1d8ed91\") " pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.251722 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebb73efe-fe18-4507-b723-d3dbf1d8ed91-scripts\") pod \"horizon-65c9478d8d-nxsfl\" (UID: \"ebb73efe-fe18-4507-b723-d3dbf1d8ed91\") " pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.255037 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebb73efe-fe18-4507-b723-d3dbf1d8ed91-horizon-tls-certs\") pod \"horizon-65c9478d8d-nxsfl\" (UID: \"ebb73efe-fe18-4507-b723-d3dbf1d8ed91\") " pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.255077 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ebb73efe-fe18-4507-b723-d3dbf1d8ed91-horizon-secret-key\") pod \"horizon-65c9478d8d-nxsfl\" (UID: \"ebb73efe-fe18-4507-b723-d3dbf1d8ed91\") " pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.255419 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb73efe-fe18-4507-b723-d3dbf1d8ed91-combined-ca-bundle\") pod \"horizon-65c9478d8d-nxsfl\" (UID: \"ebb73efe-fe18-4507-b723-d3dbf1d8ed91\") " pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.255989 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ebb73efe-fe18-4507-b723-d3dbf1d8ed91-config-data\") pod \"horizon-65c9478d8d-nxsfl\" (UID: \"ebb73efe-fe18-4507-b723-d3dbf1d8ed91\") " pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.257288 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ebb73efe-fe18-4507-b723-d3dbf1d8ed91-config-data\") pod \"horizon-65c9478d8d-nxsfl\" (UID: \"ebb73efe-fe18-4507-b723-d3dbf1d8ed91\") " pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.276420 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dk6d\" (UniqueName: 
\"kubernetes.io/projected/ebb73efe-fe18-4507-b723-d3dbf1d8ed91-kube-api-access-8dk6d\") pod \"horizon-65c9478d8d-nxsfl\" (UID: \"ebb73efe-fe18-4507-b723-d3dbf1d8ed91\") " pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.350754 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.430428 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.701257 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-sz47m" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.764271 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdh8k\" (UniqueName: \"kubernetes.io/projected/1935d5b8-9171-4708-9b50-4dbef79d106b-kube-api-access-pdh8k\") pod \"1935d5b8-9171-4708-9b50-4dbef79d106b\" (UID: \"1935d5b8-9171-4708-9b50-4dbef79d106b\") " Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.764335 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1935d5b8-9171-4708-9b50-4dbef79d106b-db-sync-config-data\") pod \"1935d5b8-9171-4708-9b50-4dbef79d106b\" (UID: \"1935d5b8-9171-4708-9b50-4dbef79d106b\") " Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.764433 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1935d5b8-9171-4708-9b50-4dbef79d106b-combined-ca-bundle\") pod \"1935d5b8-9171-4708-9b50-4dbef79d106b\" (UID: \"1935d5b8-9171-4708-9b50-4dbef79d106b\") " Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.764500 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1935d5b8-9171-4708-9b50-4dbef79d106b-config-data\") pod \"1935d5b8-9171-4708-9b50-4dbef79d106b\" (UID: \"1935d5b8-9171-4708-9b50-4dbef79d106b\") " Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.771489 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1935d5b8-9171-4708-9b50-4dbef79d106b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1935d5b8-9171-4708-9b50-4dbef79d106b" (UID: "1935d5b8-9171-4708-9b50-4dbef79d106b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.771533 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1935d5b8-9171-4708-9b50-4dbef79d106b-kube-api-access-pdh8k" (OuterVolumeSpecName: "kube-api-access-pdh8k") pod "1935d5b8-9171-4708-9b50-4dbef79d106b" (UID: "1935d5b8-9171-4708-9b50-4dbef79d106b"). InnerVolumeSpecName "kube-api-access-pdh8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.803633 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1935d5b8-9171-4708-9b50-4dbef79d106b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1935d5b8-9171-4708-9b50-4dbef79d106b" (UID: "1935d5b8-9171-4708-9b50-4dbef79d106b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.827432 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1935d5b8-9171-4708-9b50-4dbef79d106b-config-data" (OuterVolumeSpecName: "config-data") pod "1935d5b8-9171-4708-9b50-4dbef79d106b" (UID: "1935d5b8-9171-4708-9b50-4dbef79d106b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.866208 4718 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1935d5b8-9171-4708-9b50-4dbef79d106b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.866246 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1935d5b8-9171-4708-9b50-4dbef79d106b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.866256 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1935d5b8-9171-4708-9b50-4dbef79d106b-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:54 crc kubenswrapper[4718]: I1123 15:01:54.866267 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdh8k\" (UniqueName: \"kubernetes.io/projected/1935d5b8-9171-4708-9b50-4dbef79d106b-kube-api-access-pdh8k\") on node \"crc\" DevicePath \"\"" Nov 23 15:01:55 crc kubenswrapper[4718]: I1123 15:01:55.356419 4718 generic.go:334] "Generic (PLEG): container finished" podID="f4664676-6d14-425e-93d1-40c245ef5b06" containerID="2820e29c4f114c25ae7affc037dfd61dbe6ddc97552d4efe172c435a40f49260" exitCode=0 Nov 23 15:01:55 crc kubenswrapper[4718]: I1123 15:01:55.356516 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dtf42" event={"ID":"f4664676-6d14-425e-93d1-40c245ef5b06","Type":"ContainerDied","Data":"2820e29c4f114c25ae7affc037dfd61dbe6ddc97552d4efe172c435a40f49260"} Nov 23 15:01:55 crc kubenswrapper[4718]: I1123 15:01:55.358691 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-sz47m" event={"ID":"1935d5b8-9171-4708-9b50-4dbef79d106b","Type":"ContainerDied","Data":"ed616a9101b439d37e35ef56ed8036c3d33eb713e787e8eb1e8b7b86e836cd48"} Nov 23 15:01:55 crc kubenswrapper[4718]: I1123 15:01:55.358718 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed616a9101b439d37e35ef56ed8036c3d33eb713e787e8eb1e8b7b86e836cd48" Nov 23 15:01:55 crc kubenswrapper[4718]: I1123 15:01:55.358768 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-sz47m" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.088374 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-vsjz5"] Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.091841 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" podUID="99ae4906-9b02-471f-ab42-aa4dc6ba017d" containerName="dnsmasq-dns" containerID="cri-o://2627492bac546c54242b430a1591f037988c63cfad20c9d768248ff487cdd295" gracePeriod=10 Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.094786 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.158261 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-ztlxn"] Nov 23 15:01:56 crc kubenswrapper[4718]: E1123 15:01:56.158682 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1935d5b8-9171-4708-9b50-4dbef79d106b" containerName="glance-db-sync" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.158699 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1935d5b8-9171-4708-9b50-4dbef79d106b" containerName="glance-db-sync" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.158941 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="1935d5b8-9171-4708-9b50-4dbef79d106b" containerName="glance-db-sync" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.159843 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.167173 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-ztlxn"] Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.194041 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mdwb\" (UniqueName: \"kubernetes.io/projected/28f71e41-ae1d-43b8-a822-c6a56b5e8119-kube-api-access-2mdwb\") pod \"dnsmasq-dns-785d8bcb8c-ztlxn\" (UID: \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.194082 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-config\") pod \"dnsmasq-dns-785d8bcb8c-ztlxn\" (UID: \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.194148 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-ztlxn\" (UID: \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.194166 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-ztlxn\" (UID: \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 
15:01:56.194190 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-ztlxn\" (UID: \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.194225 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-ztlxn\" (UID: \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.296232 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mdwb\" (UniqueName: \"kubernetes.io/projected/28f71e41-ae1d-43b8-a822-c6a56b5e8119-kube-api-access-2mdwb\") pod \"dnsmasq-dns-785d8bcb8c-ztlxn\" (UID: \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.296281 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-config\") pod \"dnsmasq-dns-785d8bcb8c-ztlxn\" (UID: \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.296462 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-ztlxn\" (UID: \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.296488 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-ztlxn\" (UID: \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.296520 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-ztlxn\" (UID: \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.296570 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-ztlxn\" (UID: \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.297330 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-ztlxn\" (UID: \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.297352 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-config\") pod \"dnsmasq-dns-785d8bcb8c-ztlxn\" (UID: \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.298089 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-ztlxn\" (UID: \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.298135 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-ztlxn\" (UID: \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.298734 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-ztlxn\" (UID: \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.316592 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mdwb\" (UniqueName: \"kubernetes.io/projected/28f71e41-ae1d-43b8-a822-c6a56b5e8119-kube-api-access-2mdwb\") pod \"dnsmasq-dns-785d8bcb8c-ztlxn\" (UID: \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.358377 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" podUID="99ae4906-9b02-471f-ab42-aa4dc6ba017d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.143:5353: connect: connection refused" Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.381482 4718 generic.go:334] "Generic (PLEG): container finished" podID="99ae4906-9b02-471f-ab42-aa4dc6ba017d" containerID="2627492bac546c54242b430a1591f037988c63cfad20c9d768248ff487cdd295" exitCode=0 Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.381859 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" event={"ID":"99ae4906-9b02-471f-ab42-aa4dc6ba017d","Type":"ContainerDied","Data":"2627492bac546c54242b430a1591f037988c63cfad20c9d768248ff487cdd295"} Nov 23 15:01:56 crc kubenswrapper[4718]: I1123 15:01:56.494525 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.112507 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.114634 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.117144 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.117286 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-nr9gc" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.122957 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.132639 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.204782 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.207182 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.210873 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " pod="openstack/glance-default-external-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.210972 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " pod="openstack/glance-default-external-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.211000 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " pod="openstack/glance-default-external-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.211059 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-config-data\") pod \"glance-default-external-api-0\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " pod="openstack/glance-default-external-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.211081 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frn68\" (UniqueName: \"kubernetes.io/projected/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-kube-api-access-frn68\") pod \"glance-default-external-api-0\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " pod="openstack/glance-default-external-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.211105 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-scripts\") pod \"glance-default-external-api-0\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " pod="openstack/glance-default-external-api-0" Nov 23 15:01:57 crc 
kubenswrapper[4718]: I1123 15:01:57.211154 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-logs\") pod \"glance-default-external-api-0\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " pod="openstack/glance-default-external-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.211247 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.237742 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.315285 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-logs\") pod \"glance-default-external-api-0\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " pod="openstack/glance-default-external-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.315337 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba72af14-c7bb-49fd-8838-97542eee727f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.315365 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " pod="openstack/glance-default-external-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.315418 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.315752 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba72af14-c7bb-49fd-8838-97542eee727f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.315816 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4kfn\" (UniqueName: \"kubernetes.io/projected/ba72af14-c7bb-49fd-8838-97542eee727f-kube-api-access-d4kfn\") pod \"glance-default-internal-api-0\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.315867 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " pod="openstack/glance-default-external-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.315879 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-logs\") pod \"glance-default-external-api-0\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " pod="openstack/glance-default-external-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.315885 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " pod="openstack/glance-default-external-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.315929 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " pod="openstack/glance-default-external-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.315957 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba72af14-c7bb-49fd-8838-97542eee727f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.315985 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba72af14-c7bb-49fd-8838-97542eee727f-logs\") pod \"glance-default-internal-api-0\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.316019 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba72af14-c7bb-49fd-8838-97542eee727f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.316064 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frn68\" (UniqueName: \"kubernetes.io/projected/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-kube-api-access-frn68\") pod \"glance-default-external-api-0\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " pod="openstack/glance-default-external-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.316086 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-config-data\") pod \"glance-default-external-api-0\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " pod="openstack/glance-default-external-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.316112 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-scripts\") pod \"glance-default-external-api-0\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " pod="openstack/glance-default-external-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.317100 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.335987 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-scripts\") pod \"glance-default-external-api-0\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " pod="openstack/glance-default-external-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.345869 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " pod="openstack/glance-default-external-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.347537 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-config-data\") pod \"glance-default-external-api-0\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " pod="openstack/glance-default-external-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.365626 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frn68\" (UniqueName: \"kubernetes.io/projected/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-kube-api-access-frn68\") pod \"glance-default-external-api-0\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " pod="openstack/glance-default-external-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.367713 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " pod="openstack/glance-default-external-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.420865 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba72af14-c7bb-49fd-8838-97542eee727f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.420954 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.420976 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba72af14-c7bb-49fd-8838-97542eee727f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.420997 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4kfn\" (UniqueName: \"kubernetes.io/projected/ba72af14-c7bb-49fd-8838-97542eee727f-kube-api-access-d4kfn\") pod 
\"glance-default-internal-api-0\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.421029 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba72af14-c7bb-49fd-8838-97542eee727f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.421044 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba72af14-c7bb-49fd-8838-97542eee727f-logs\") pod \"glance-default-internal-api-0\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.421067 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba72af14-c7bb-49fd-8838-97542eee727f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.423951 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba72af14-c7bb-49fd-8838-97542eee727f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.431121 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba72af14-c7bb-49fd-8838-97542eee727f-logs\") pod \"glance-default-internal-api-0\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.431229 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.432489 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba72af14-c7bb-49fd-8838-97542eee727f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.434791 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba72af14-c7bb-49fd-8838-97542eee727f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.438868 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.446125 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba72af14-c7bb-49fd-8838-97542eee727f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.473266 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4kfn\" (UniqueName: \"kubernetes.io/projected/ba72af14-c7bb-49fd-8838-97542eee727f-kube-api-access-d4kfn\") pod \"glance-default-internal-api-0\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.506882 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:01:57 crc kubenswrapper[4718]: I1123 15:01:57.526583 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 15:01:58 crc kubenswrapper[4718]: I1123 15:01:58.781950 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 15:01:58 crc kubenswrapper[4718]: I1123 15:01:58.851665 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 15:02:01 crc kubenswrapper[4718]: I1123 15:02:01.359148 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" podUID="99ae4906-9b02-471f-ab42-aa4dc6ba017d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.143:5353: connect: connection refused" Nov 23 15:02:04 crc kubenswrapper[4718]: E1123 15:02:04.971858 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Nov 23 15:02:04 crc kubenswrapper[4718]: E1123 15:02:04.972996 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wfv92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-2m7cd_openstack(88793089-cfde-48d5-8670-880344ab6711): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 15:02:04 crc kubenswrapper[4718]: E1123 15:02:04.974277 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-2m7cd" podUID="88793089-cfde-48d5-8670-880344ab6711" Nov 23 15:02:05 crc kubenswrapper[4718]: E1123 15:02:05.021839 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 23 15:02:05 crc kubenswrapper[4718]: E1123 15:02:05.022013 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n595hffhbbh55bh657h5chc6h68h5f5h7dh564hb6h8h5b8h55bh588h659h656h598h584h685h576h89h689h596h5f7h66bh696h76h87h5d9h68cq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zg9f6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-868dfc8f69-dxnq8_openstack(b751d852-bc02-41e5-9937-8590d330b58d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 15:02:05 crc kubenswrapper[4718]: E1123 15:02:05.025990 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-868dfc8f69-dxnq8" podUID="b751d852-bc02-41e5-9937-8590d330b58d" Nov 23 15:02:05 crc kubenswrapper[4718]: E1123 15:02:05.046666 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 23 15:02:05 crc kubenswrapper[4718]: E1123 15:02:05.047038 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nc4h6dh657h5bfh67fh5bbh598hfh588hc5h7fh574hc6h5c7h9h5fch7ch5c4h7chdbh9h6dhbh5fch7bh4h97hc8hbdh5f9h55ch589q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcstn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-69c565788f-fjphh_openstack(1c5fc14d-062b-4392-97e9-6125fb9b281a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 15:02:05 crc kubenswrapper[4718]: E1123 15:02:05.049493 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-69c565788f-fjphh" podUID="1c5fc14d-062b-4392-97e9-6125fb9b281a" Nov 23 15:02:05 crc kubenswrapper[4718]: E1123 15:02:05.475266 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-2m7cd" podUID="88793089-cfde-48d5-8670-880344ab6711" Nov 23 15:02:06 crc kubenswrapper[4718]: I1123 15:02:06.359384 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" podUID="99ae4906-9b02-471f-ab42-aa4dc6ba017d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.143:5353: connect: connection refused" Nov 23 15:02:06 crc kubenswrapper[4718]: I1123 15:02:06.359885 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:02:07 crc kubenswrapper[4718]: E1123 15:02:07.363606 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 23 15:02:07 crc kubenswrapper[4718]: E1123 15:02:07.363742 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9zzfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-2rx99_openstack(2456d901-f349-4f7a-b9cc-63c9eba428c8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 15:02:07 crc kubenswrapper[4718]: E1123 15:02:07.365897 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-2rx99" podUID="2456d901-f349-4f7a-b9cc-63c9eba428c8" Nov 23 15:02:07 crc kubenswrapper[4718]: E1123 15:02:07.491475 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-2rx99" podUID="2456d901-f349-4f7a-b9cc-63c9eba428c8" Nov 23 15:02:07 crc kubenswrapper[4718]: E1123 15:02:07.861281 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 23 15:02:07 crc kubenswrapper[4718]: E1123 15:02:07.861785 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h5f6h564hf7h674h58ch589h696h56h65fh677h9ch7ch9bh54ch58dh54dh68dh644h685h646h98h598h5cdh584h56ch5cch58bh98h558h65h599q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tl4b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(b9b11789-7642-4d03-a060-26842da8ab4b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 15:02:07 crc kubenswrapper[4718]: I1123 15:02:07.878395 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dtf42" Nov 23 15:02:08 crc kubenswrapper[4718]: I1123 15:02:08.024363 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-combined-ca-bundle\") pod \"f4664676-6d14-425e-93d1-40c245ef5b06\" (UID: \"f4664676-6d14-425e-93d1-40c245ef5b06\") " Nov 23 15:02:08 crc kubenswrapper[4718]: I1123 15:02:08.024411 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-config-data\") pod \"f4664676-6d14-425e-93d1-40c245ef5b06\" (UID: \"f4664676-6d14-425e-93d1-40c245ef5b06\") " Nov 23 15:02:08 crc kubenswrapper[4718]: I1123 15:02:08.024447 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-credential-keys\") pod \"f4664676-6d14-425e-93d1-40c245ef5b06\" (UID: \"f4664676-6d14-425e-93d1-40c245ef5b06\") " Nov 23 15:02:08 crc kubenswrapper[4718]: I1123 15:02:08.024524 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-fernet-keys\") pod \"f4664676-6d14-425e-93d1-40c245ef5b06\" (UID: \"f4664676-6d14-425e-93d1-40c245ef5b06\") " Nov 23 15:02:08 crc kubenswrapper[4718]: I1123 15:02:08.024634 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-scripts\") pod \"f4664676-6d14-425e-93d1-40c245ef5b06\" (UID: \"f4664676-6d14-425e-93d1-40c245ef5b06\") " Nov 23 15:02:08 crc kubenswrapper[4718]: I1123 15:02:08.024709 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx5mx\" (UniqueName: \"kubernetes.io/projected/f4664676-6d14-425e-93d1-40c245ef5b06-kube-api-access-sx5mx\") pod \"f4664676-6d14-425e-93d1-40c245ef5b06\" (UID: \"f4664676-6d14-425e-93d1-40c245ef5b06\") " Nov 23 15:02:08 crc kubenswrapper[4718]: I1123 15:02:08.030940 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f4664676-6d14-425e-93d1-40c245ef5b06" (UID: "f4664676-6d14-425e-93d1-40c245ef5b06"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:08 crc kubenswrapper[4718]: I1123 15:02:08.031012 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-scripts" (OuterVolumeSpecName: "scripts") pod "f4664676-6d14-425e-93d1-40c245ef5b06" (UID: "f4664676-6d14-425e-93d1-40c245ef5b06"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:08 crc kubenswrapper[4718]: I1123 15:02:08.031025 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f4664676-6d14-425e-93d1-40c245ef5b06" (UID: "f4664676-6d14-425e-93d1-40c245ef5b06"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:08 crc kubenswrapper[4718]: I1123 15:02:08.031059 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4664676-6d14-425e-93d1-40c245ef5b06-kube-api-access-sx5mx" (OuterVolumeSpecName: "kube-api-access-sx5mx") pod "f4664676-6d14-425e-93d1-40c245ef5b06" (UID: "f4664676-6d14-425e-93d1-40c245ef5b06"). InnerVolumeSpecName "kube-api-access-sx5mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:02:08 crc kubenswrapper[4718]: I1123 15:02:08.050365 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4664676-6d14-425e-93d1-40c245ef5b06" (UID: "f4664676-6d14-425e-93d1-40c245ef5b06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:08 crc kubenswrapper[4718]: I1123 15:02:08.081269 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-config-data" (OuterVolumeSpecName: "config-data") pod "f4664676-6d14-425e-93d1-40c245ef5b06" (UID: "f4664676-6d14-425e-93d1-40c245ef5b06"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:08 crc kubenswrapper[4718]: I1123 15:02:08.126217 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:08 crc kubenswrapper[4718]: I1123 15:02:08.126254 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx5mx\" (UniqueName: \"kubernetes.io/projected/f4664676-6d14-425e-93d1-40c245ef5b06-kube-api-access-sx5mx\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:08 crc kubenswrapper[4718]: I1123 15:02:08.126287 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:08 crc kubenswrapper[4718]: I1123 15:02:08.126296 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:08 crc kubenswrapper[4718]: I1123 15:02:08.126306 4718 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:08 crc kubenswrapper[4718]: I1123 15:02:08.126314 4718 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f4664676-6d14-425e-93d1-40c245ef5b06-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:08 crc kubenswrapper[4718]: I1123 15:02:08.499199 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dtf42" event={"ID":"f4664676-6d14-425e-93d1-40c245ef5b06","Type":"ContainerDied","Data":"87233dbde4f2ffa017edd89c0714b74b569d69761486618b06209e1945ed9d30"} Nov 23 15:02:08 crc kubenswrapper[4718]: I1123 15:02:08.499511 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87233dbde4f2ffa017edd89c0714b74b569d69761486618b06209e1945ed9d30" Nov 23 15:02:08 crc kubenswrapper[4718]: I1123 
15:02:08.499583 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dtf42" Nov 23 15:02:08 crc kubenswrapper[4718]: I1123 15:02:08.967596 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dtf42"] Nov 23 15:02:08 crc kubenswrapper[4718]: I1123 15:02:08.974561 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dtf42"] Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.076846 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-j62px"] Nov 23 15:02:09 crc kubenswrapper[4718]: E1123 15:02:09.077546 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4664676-6d14-425e-93d1-40c245ef5b06" containerName="keystone-bootstrap" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.077643 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4664676-6d14-425e-93d1-40c245ef5b06" containerName="keystone-bootstrap" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.077891 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4664676-6d14-425e-93d1-40c245ef5b06" containerName="keystone-bootstrap" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.079892 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-j62px" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.086884 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.086942 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.086942 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.087092 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xknxt" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.089239 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-j62px"] Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.095521 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.245079 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-config-data\") pod \"keystone-bootstrap-j62px\" (UID: \"ef41e996-1748-4b67-ad0c-1b7481290391\") " pod="openstack/keystone-bootstrap-j62px" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.245209 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-combined-ca-bundle\") pod \"keystone-bootstrap-j62px\" (UID: \"ef41e996-1748-4b67-ad0c-1b7481290391\") " pod="openstack/keystone-bootstrap-j62px" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.245733 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-fernet-keys\") pod \"keystone-bootstrap-j62px\" (UID: \"ef41e996-1748-4b67-ad0c-1b7481290391\") " 
pod="openstack/keystone-bootstrap-j62px" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.245846 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-scripts\") pod \"keystone-bootstrap-j62px\" (UID: \"ef41e996-1748-4b67-ad0c-1b7481290391\") " pod="openstack/keystone-bootstrap-j62px" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.245969 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2ldd\" (UniqueName: \"kubernetes.io/projected/ef41e996-1748-4b67-ad0c-1b7481290391-kube-api-access-g2ldd\") pod \"keystone-bootstrap-j62px\" (UID: \"ef41e996-1748-4b67-ad0c-1b7481290391\") " pod="openstack/keystone-bootstrap-j62px" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.246006 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-credential-keys\") pod \"keystone-bootstrap-j62px\" (UID: \"ef41e996-1748-4b67-ad0c-1b7481290391\") " pod="openstack/keystone-bootstrap-j62px" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.347559 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-config-data\") pod \"keystone-bootstrap-j62px\" (UID: \"ef41e996-1748-4b67-ad0c-1b7481290391\") " pod="openstack/keystone-bootstrap-j62px" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.347639 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-combined-ca-bundle\") pod \"keystone-bootstrap-j62px\" (UID: \"ef41e996-1748-4b67-ad0c-1b7481290391\") " pod="openstack/keystone-bootstrap-j62px" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.347689 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-fernet-keys\") pod \"keystone-bootstrap-j62px\" (UID: \"ef41e996-1748-4b67-ad0c-1b7481290391\") " pod="openstack/keystone-bootstrap-j62px" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.347736 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-scripts\") pod \"keystone-bootstrap-j62px\" (UID: \"ef41e996-1748-4b67-ad0c-1b7481290391\") " pod="openstack/keystone-bootstrap-j62px" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.347785 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2ldd\" (UniqueName: \"kubernetes.io/projected/ef41e996-1748-4b67-ad0c-1b7481290391-kube-api-access-g2ldd\") pod \"keystone-bootstrap-j62px\" (UID: \"ef41e996-1748-4b67-ad0c-1b7481290391\") " pod="openstack/keystone-bootstrap-j62px" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.347805 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-credential-keys\") pod \"keystone-bootstrap-j62px\" (UID: \"ef41e996-1748-4b67-ad0c-1b7481290391\") " pod="openstack/keystone-bootstrap-j62px" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 
15:02:09.352380 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-fernet-keys\") pod \"keystone-bootstrap-j62px\" (UID: \"ef41e996-1748-4b67-ad0c-1b7481290391\") " pod="openstack/keystone-bootstrap-j62px" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.352879 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-credential-keys\") pod \"keystone-bootstrap-j62px\" (UID: \"ef41e996-1748-4b67-ad0c-1b7481290391\") " pod="openstack/keystone-bootstrap-j62px" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.353584 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-config-data\") pod \"keystone-bootstrap-j62px\" (UID: \"ef41e996-1748-4b67-ad0c-1b7481290391\") " pod="openstack/keystone-bootstrap-j62px" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.353638 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-combined-ca-bundle\") pod \"keystone-bootstrap-j62px\" (UID: \"ef41e996-1748-4b67-ad0c-1b7481290391\") " pod="openstack/keystone-bootstrap-j62px" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.354351 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-scripts\") pod \"keystone-bootstrap-j62px\" (UID: \"ef41e996-1748-4b67-ad0c-1b7481290391\") " pod="openstack/keystone-bootstrap-j62px" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.365125 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2ldd\" (UniqueName: \"kubernetes.io/projected/ef41e996-1748-4b67-ad0c-1b7481290391-kube-api-access-g2ldd\") pod \"keystone-bootstrap-j62px\" (UID: \"ef41e996-1748-4b67-ad0c-1b7481290391\") " pod="openstack/keystone-bootstrap-j62px" Nov 23 15:02:09 crc kubenswrapper[4718]: I1123 15:02:09.401828 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-j62px" Nov 23 15:02:10 crc kubenswrapper[4718]: I1123 15:02:10.452127 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4664676-6d14-425e-93d1-40c245ef5b06" path="/var/lib/kubelet/pods/f4664676-6d14-425e-93d1-40c245ef5b06/volumes" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.082863 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-868dfc8f69-dxnq8" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.091233 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69c565788f-fjphh" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.098920 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.153431 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b751d852-bc02-41e5-9937-8590d330b58d-logs\") pod \"b751d852-bc02-41e5-9937-8590d330b58d\" (UID: \"b751d852-bc02-41e5-9937-8590d330b58d\") " Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.153513 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c5fc14d-062b-4392-97e9-6125fb9b281a-scripts\") pod \"1c5fc14d-062b-4392-97e9-6125fb9b281a\" (UID: \"1c5fc14d-062b-4392-97e9-6125fb9b281a\") " Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.153561 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b751d852-bc02-41e5-9937-8590d330b58d-scripts\") pod \"b751d852-bc02-41e5-9937-8590d330b58d\" (UID: \"b751d852-bc02-41e5-9937-8590d330b58d\") " Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.153579 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b751d852-bc02-41e5-9937-8590d330b58d-config-data\") pod \"b751d852-bc02-41e5-9937-8590d330b58d\" (UID: \"b751d852-bc02-41e5-9937-8590d330b58d\") " Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.153626 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-dns-swift-storage-0\") pod \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\" (UID: \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\") " Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.153648 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c5fc14d-062b-4392-97e9-6125fb9b281a-config-data\") pod \"1c5fc14d-062b-4392-97e9-6125fb9b281a\" (UID: \"1c5fc14d-062b-4392-97e9-6125fb9b281a\") " Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.153669 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-ovsdbserver-sb\") pod \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\" (UID: \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\") " Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.153715 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-config\") pod \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\" (UID: \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\") " Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.153760 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-dns-svc\") pod \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\" (UID: \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\") " Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.153781 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mf8t\" (UniqueName: \"kubernetes.io/projected/99ae4906-9b02-471f-ab42-aa4dc6ba017d-kube-api-access-6mf8t\") pod \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\" (UID: \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\") " Nov 23 
15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.153847 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c5fc14d-062b-4392-97e9-6125fb9b281a-logs\") pod \"1c5fc14d-062b-4392-97e9-6125fb9b281a\" (UID: \"1c5fc14d-062b-4392-97e9-6125fb9b281a\") " Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.153869 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-ovsdbserver-nb\") pod \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\" (UID: \"99ae4906-9b02-471f-ab42-aa4dc6ba017d\") " Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.153927 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg9f6\" (UniqueName: \"kubernetes.io/projected/b751d852-bc02-41e5-9937-8590d330b58d-kube-api-access-zg9f6\") pod \"b751d852-bc02-41e5-9937-8590d330b58d\" (UID: \"b751d852-bc02-41e5-9937-8590d330b58d\") " Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.153952 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1c5fc14d-062b-4392-97e9-6125fb9b281a-horizon-secret-key\") pod \"1c5fc14d-062b-4392-97e9-6125fb9b281a\" (UID: \"1c5fc14d-062b-4392-97e9-6125fb9b281a\") " Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.153978 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b751d852-bc02-41e5-9937-8590d330b58d-horizon-secret-key\") pod \"b751d852-bc02-41e5-9937-8590d330b58d\" (UID: \"b751d852-bc02-41e5-9937-8590d330b58d\") " Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.154015 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcstn\" (UniqueName: \"kubernetes.io/projected/1c5fc14d-062b-4392-97e9-6125fb9b281a-kube-api-access-jcstn\") pod \"1c5fc14d-062b-4392-97e9-6125fb9b281a\" (UID: \"1c5fc14d-062b-4392-97e9-6125fb9b281a\") " Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.153889 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b751d852-bc02-41e5-9937-8590d330b58d-logs" (OuterVolumeSpecName: "logs") pod "b751d852-bc02-41e5-9937-8590d330b58d" (UID: "b751d852-bc02-41e5-9937-8590d330b58d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.154789 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c5fc14d-062b-4392-97e9-6125fb9b281a-config-data" (OuterVolumeSpecName: "config-data") pod "1c5fc14d-062b-4392-97e9-6125fb9b281a" (UID: "1c5fc14d-062b-4392-97e9-6125fb9b281a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.155611 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b751d852-bc02-41e5-9937-8590d330b58d-scripts" (OuterVolumeSpecName: "scripts") pod "b751d852-bc02-41e5-9937-8590d330b58d" (UID: "b751d852-bc02-41e5-9937-8590d330b58d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.155681 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c5fc14d-062b-4392-97e9-6125fb9b281a-scripts" (OuterVolumeSpecName: "scripts") pod "1c5fc14d-062b-4392-97e9-6125fb9b281a" (UID: "1c5fc14d-062b-4392-97e9-6125fb9b281a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.155828 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b751d852-bc02-41e5-9937-8590d330b58d-config-data" (OuterVolumeSpecName: "config-data") pod "b751d852-bc02-41e5-9937-8590d330b58d" (UID: "b751d852-bc02-41e5-9937-8590d330b58d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.156003 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c5fc14d-062b-4392-97e9-6125fb9b281a-logs" (OuterVolumeSpecName: "logs") pod "1c5fc14d-062b-4392-97e9-6125fb9b281a" (UID: "1c5fc14d-062b-4392-97e9-6125fb9b281a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.161480 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b751d852-bc02-41e5-9937-8590d330b58d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b751d852-bc02-41e5-9937-8590d330b58d" (UID: "b751d852-bc02-41e5-9937-8590d330b58d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.166943 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c5fc14d-062b-4392-97e9-6125fb9b281a-kube-api-access-jcstn" (OuterVolumeSpecName: "kube-api-access-jcstn") pod "1c5fc14d-062b-4392-97e9-6125fb9b281a" (UID: "1c5fc14d-062b-4392-97e9-6125fb9b281a"). InnerVolumeSpecName "kube-api-access-jcstn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.167067 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99ae4906-9b02-471f-ab42-aa4dc6ba017d-kube-api-access-6mf8t" (OuterVolumeSpecName: "kube-api-access-6mf8t") pod "99ae4906-9b02-471f-ab42-aa4dc6ba017d" (UID: "99ae4906-9b02-471f-ab42-aa4dc6ba017d"). InnerVolumeSpecName "kube-api-access-6mf8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.168406 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b751d852-bc02-41e5-9937-8590d330b58d-kube-api-access-zg9f6" (OuterVolumeSpecName: "kube-api-access-zg9f6") pod "b751d852-bc02-41e5-9937-8590d330b58d" (UID: "b751d852-bc02-41e5-9937-8590d330b58d"). InnerVolumeSpecName "kube-api-access-zg9f6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.169306 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c5fc14d-062b-4392-97e9-6125fb9b281a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1c5fc14d-062b-4392-97e9-6125fb9b281a" (UID: "1c5fc14d-062b-4392-97e9-6125fb9b281a"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.201326 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "99ae4906-9b02-471f-ab42-aa4dc6ba017d" (UID: "99ae4906-9b02-471f-ab42-aa4dc6ba017d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.222702 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-config" (OuterVolumeSpecName: "config") pod "99ae4906-9b02-471f-ab42-aa4dc6ba017d" (UID: "99ae4906-9b02-471f-ab42-aa4dc6ba017d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.222948 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "99ae4906-9b02-471f-ab42-aa4dc6ba017d" (UID: "99ae4906-9b02-471f-ab42-aa4dc6ba017d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.223060 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "99ae4906-9b02-471f-ab42-aa4dc6ba017d" (UID: "99ae4906-9b02-471f-ab42-aa4dc6ba017d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.225757 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "99ae4906-9b02-471f-ab42-aa4dc6ba017d" (UID: "99ae4906-9b02-471f-ab42-aa4dc6ba017d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.255735 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg9f6\" (UniqueName: \"kubernetes.io/projected/b751d852-bc02-41e5-9937-8590d330b58d-kube-api-access-zg9f6\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.255768 4718 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1c5fc14d-062b-4392-97e9-6125fb9b281a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.255777 4718 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b751d852-bc02-41e5-9937-8590d330b58d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.255788 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcstn\" (UniqueName: \"kubernetes.io/projected/1c5fc14d-062b-4392-97e9-6125fb9b281a-kube-api-access-jcstn\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.255796 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b751d852-bc02-41e5-9937-8590d330b58d-logs\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.255805 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c5fc14d-062b-4392-97e9-6125fb9b281a-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.255813 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b751d852-bc02-41e5-9937-8590d330b58d-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.255820 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b751d852-bc02-41e5-9937-8590d330b58d-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.255828 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.255835 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c5fc14d-062b-4392-97e9-6125fb9b281a-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.255843 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.255851 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-config\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.255860 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 
15:02:15.255869 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mf8t\" (UniqueName: \"kubernetes.io/projected/99ae4906-9b02-471f-ab42-aa4dc6ba017d-kube-api-access-6mf8t\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.255876 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c5fc14d-062b-4392-97e9-6125fb9b281a-logs\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.255884 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99ae4906-9b02-471f-ab42-aa4dc6ba017d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.565639 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" event={"ID":"99ae4906-9b02-471f-ab42-aa4dc6ba017d","Type":"ContainerDied","Data":"afd503628a30ba76d4305c05c6942d00cf65d47eb14e7d66f0cbc1e81f046920"} Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.565714 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.565933 4718 scope.go:117] "RemoveContainer" containerID="2627492bac546c54242b430a1591f037988c63cfad20c9d768248ff487cdd295" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.567280 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c565788f-fjphh" event={"ID":"1c5fc14d-062b-4392-97e9-6125fb9b281a","Type":"ContainerDied","Data":"0af420eab7b9f2c90c8e111b6abf921ebc1a035988a11e3ea63b396e63f8de77"} Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.567306 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69c565788f-fjphh" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.570317 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-868dfc8f69-dxnq8" event={"ID":"b751d852-bc02-41e5-9937-8590d330b58d","Type":"ContainerDied","Data":"37a2392fae1780b0d914fd2cad1e9824ae619d9afba3223ee8fc97acea27cd41"} Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.570375 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-868dfc8f69-dxnq8" Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.640131 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69c565788f-fjphh"] Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.655119 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-69c565788f-fjphh"] Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.672639 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-868dfc8f69-dxnq8"] Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.679720 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-868dfc8f69-dxnq8"] Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.686961 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-vsjz5"] Nov 23 15:02:15 crc kubenswrapper[4718]: I1123 15:02:15.693657 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-vsjz5"] Nov 23 15:02:16 crc kubenswrapper[4718]: E1123 15:02:16.182081 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 23 15:02:16 crc kubenswrapper[4718]: E1123 15:02:16.182521 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jnclm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptio
ns:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-cfbmg_openstack(2de4e428-ba8b-43d2-893d-e6f020997e5b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 15:02:16 crc kubenswrapper[4718]: E1123 15:02:16.184501 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-cfbmg" podUID="2de4e428-ba8b-43d2-893d-e6f020997e5b" Nov 23 15:02:16 crc kubenswrapper[4718]: I1123 15:02:16.359230 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-vsjz5" podUID="99ae4906-9b02-471f-ab42-aa4dc6ba017d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.143:5353: i/o timeout" Nov 23 15:02:16 crc kubenswrapper[4718]: I1123 15:02:16.453519 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c5fc14d-062b-4392-97e9-6125fb9b281a" path="/var/lib/kubelet/pods/1c5fc14d-062b-4392-97e9-6125fb9b281a/volumes" Nov 23 15:02:16 crc kubenswrapper[4718]: I1123 15:02:16.453897 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99ae4906-9b02-471f-ab42-aa4dc6ba017d" path="/var/lib/kubelet/pods/99ae4906-9b02-471f-ab42-aa4dc6ba017d/volumes" Nov 23 15:02:16 crc kubenswrapper[4718]: I1123 15:02:16.454530 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b751d852-bc02-41e5-9937-8590d330b58d" path="/var/lib/kubelet/pods/b751d852-bc02-41e5-9937-8590d330b58d/volumes" Nov 23 15:02:16 crc kubenswrapper[4718]: I1123 15:02:16.493797 4718 scope.go:117] "RemoveContainer" containerID="114ebe8142a12147bb58d0ef0988ab3edaf86bbe0f88b002b1ee8ac6874a88b0" Nov 23 15:02:17 crc kubenswrapper[4718]: I1123 15:02:16.632285 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5696759568-pxlzs"] Nov 23 15:02:17 crc kubenswrapper[4718]: E1123 15:02:16.646990 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-cfbmg" podUID="2de4e428-ba8b-43d2-893d-e6f020997e5b" Nov 23 15:02:17 crc kubenswrapper[4718]: W1123 15:02:16.647519 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dba5adf_299f_404c_91f9_c5848e9babe4.slice/crio-459928a47f6fdd1add46415725df9b5ca31f042b6679472b21e0a03f7b653f00 WatchSource:0}: Error finding container 459928a47f6fdd1add46415725df9b5ca31f042b6679472b21e0a03f7b653f00: Status 404 returned error can't find the container with id 459928a47f6fdd1add46415725df9b5ca31f042b6679472b21e0a03f7b653f00 Nov 23 15:02:17 crc kubenswrapper[4718]: I1123 15:02:16.693957 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65c9478d8d-nxsfl"] Nov 23 15:02:17 crc kubenswrapper[4718]: W1123 15:02:16.717594 4718 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebb73efe_fe18_4507_b723_d3dbf1d8ed91.slice/crio-6044a21011229dc374cc313c73481092df278747167760d5a9e1995980693de7 WatchSource:0}: Error finding container 6044a21011229dc374cc313c73481092df278747167760d5a9e1995980693de7: Status 404 returned error can't find the container with id 6044a21011229dc374cc313c73481092df278747167760d5a9e1995980693de7 Nov 23 15:02:17 crc kubenswrapper[4718]: I1123 15:02:16.785727 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 15:02:17 crc kubenswrapper[4718]: W1123 15:02:16.802892 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba72af14_c7bb_49fd_8838_97542eee727f.slice/crio-8a428f6c965c6361de8abf38cce58fbe2e0b0cbc9eb75a7eec44566ad09fae05 WatchSource:0}: Error finding container 8a428f6c965c6361de8abf38cce58fbe2e0b0cbc9eb75a7eec44566ad09fae05: Status 404 returned error can't find the container with id 8a428f6c965c6361de8abf38cce58fbe2e0b0cbc9eb75a7eec44566ad09fae05 Nov 23 15:02:17 crc kubenswrapper[4718]: I1123 15:02:17.664766 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9b11789-7642-4d03-a060-26842da8ab4b","Type":"ContainerStarted","Data":"0bc56d42abc1cb56ca15c69af2e2cd574714052c14728e576d622850687d287e"} Nov 23 15:02:17 crc kubenswrapper[4718]: I1123 15:02:17.669328 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65c9478d8d-nxsfl" event={"ID":"ebb73efe-fe18-4507-b723-d3dbf1d8ed91","Type":"ContainerStarted","Data":"46df946bed7b905e34865e338cb747c8fd3526752b5035d2987981f1b55faaa7"} Nov 23 15:02:17 crc kubenswrapper[4718]: I1123 15:02:17.669379 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65c9478d8d-nxsfl" event={"ID":"ebb73efe-fe18-4507-b723-d3dbf1d8ed91","Type":"ContainerStarted","Data":"f9478983914fb28c51dec4d3b441138969a727066f534d992790434c5caf9a76"} Nov 23 15:02:17 crc kubenswrapper[4718]: I1123 15:02:17.669392 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65c9478d8d-nxsfl" event={"ID":"ebb73efe-fe18-4507-b723-d3dbf1d8ed91","Type":"ContainerStarted","Data":"6044a21011229dc374cc313c73481092df278747167760d5a9e1995980693de7"} Nov 23 15:02:17 crc kubenswrapper[4718]: I1123 15:02:17.674626 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5447c9669f-r2wq4" event={"ID":"ffcd0872-12a2-4dc9-bfde-22681ed5212f","Type":"ContainerStarted","Data":"3e5fef0454fdbdb80fb1134eea5cb6a39c41bb7e0a2ed6a2be57c55dcf37dc38"} Nov 23 15:02:17 crc kubenswrapper[4718]: I1123 15:02:17.674670 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5447c9669f-r2wq4" event={"ID":"ffcd0872-12a2-4dc9-bfde-22681ed5212f","Type":"ContainerStarted","Data":"c3f67d4074e98f6e445fe9d4b9b94ec84d6cde963c168517150840f12ada5799"} Nov 23 15:02:17 crc kubenswrapper[4718]: I1123 15:02:17.674810 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5447c9669f-r2wq4" podUID="ffcd0872-12a2-4dc9-bfde-22681ed5212f" containerName="horizon-log" containerID="cri-o://c3f67d4074e98f6e445fe9d4b9b94ec84d6cde963c168517150840f12ada5799" gracePeriod=30 Nov 23 15:02:17 crc kubenswrapper[4718]: I1123 15:02:17.675122 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5447c9669f-r2wq4" podUID="ffcd0872-12a2-4dc9-bfde-22681ed5212f" 
containerName="horizon" containerID="cri-o://3e5fef0454fdbdb80fb1134eea5cb6a39c41bb7e0a2ed6a2be57c55dcf37dc38" gracePeriod=30 Nov 23 15:02:17 crc kubenswrapper[4718]: I1123 15:02:17.683479 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5696759568-pxlzs" event={"ID":"5dba5adf-299f-404c-91f9-c5848e9babe4","Type":"ContainerStarted","Data":"b33354d724c85f5c24909dfdf0de020feb9b6de40d0a896f864c2518ec57bb14"} Nov 23 15:02:17 crc kubenswrapper[4718]: I1123 15:02:17.683523 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5696759568-pxlzs" event={"ID":"5dba5adf-299f-404c-91f9-c5848e9babe4","Type":"ContainerStarted","Data":"5060536a7c2e7d7ff42bbf0da0aaf1f0ffb07148047ba73cb4ff2fae01e7acc2"} Nov 23 15:02:17 crc kubenswrapper[4718]: I1123 15:02:17.683535 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5696759568-pxlzs" event={"ID":"5dba5adf-299f-404c-91f9-c5848e9babe4","Type":"ContainerStarted","Data":"459928a47f6fdd1add46415725df9b5ca31f042b6679472b21e0a03f7b653f00"} Nov 23 15:02:17 crc kubenswrapper[4718]: I1123 15:02:17.698798 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ba72af14-c7bb-49fd-8838-97542eee727f","Type":"ContainerStarted","Data":"35c05670ecfdbf955de85ef67100250f0bacacf357a6cac91d1ce5c9c4b0c2e6"} Nov 23 15:02:17 crc kubenswrapper[4718]: I1123 15:02:17.698838 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ba72af14-c7bb-49fd-8838-97542eee727f","Type":"ContainerStarted","Data":"8a428f6c965c6361de8abf38cce58fbe2e0b0cbc9eb75a7eec44566ad09fae05"} Nov 23 15:02:17 crc kubenswrapper[4718]: I1123 15:02:17.699210 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-65c9478d8d-nxsfl" podStartSLOduration=23.699197641 podStartE2EDuration="23.699197641s" podCreationTimestamp="2025-11-23 15:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:02:17.692879206 +0000 UTC m=+988.932499070" watchObservedRunningTime="2025-11-23 15:02:17.699197641 +0000 UTC m=+988.938817485" Nov 23 15:02:17 crc kubenswrapper[4718]: I1123 15:02:17.716180 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5447c9669f-r2wq4" Nov 23 15:02:17 crc kubenswrapper[4718]: I1123 15:02:17.737859 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5447c9669f-r2wq4" podStartSLOduration=3.902638576 podStartE2EDuration="30.737841236s" podCreationTimestamp="2025-11-23 15:01:47 +0000 UTC" firstStartedPulling="2025-11-23 15:01:48.168329764 +0000 UTC m=+959.407949608" lastFinishedPulling="2025-11-23 15:02:15.003532424 +0000 UTC m=+986.243152268" observedRunningTime="2025-11-23 15:02:17.735285114 +0000 UTC m=+988.974904958" watchObservedRunningTime="2025-11-23 15:02:17.737841236 +0000 UTC m=+988.977461080" Nov 23 15:02:17 crc kubenswrapper[4718]: I1123 15:02:17.741416 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5696759568-pxlzs" podStartSLOduration=24.741396974 podStartE2EDuration="24.741396974s" podCreationTimestamp="2025-11-23 15:01:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:02:17.711511723 +0000 UTC m=+988.951131577" 
watchObservedRunningTime="2025-11-23 15:02:17.741396974 +0000 UTC m=+988.981016818" Nov 23 15:02:18 crc kubenswrapper[4718]: I1123 15:02:18.131364 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-ztlxn"] Nov 23 15:02:18 crc kubenswrapper[4718]: I1123 15:02:18.148380 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-j62px"] Nov 23 15:02:18 crc kubenswrapper[4718]: I1123 15:02:18.277822 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 15:02:18 crc kubenswrapper[4718]: I1123 15:02:18.712505 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3","Type":"ContainerStarted","Data":"8d0b8a2e2aa15f19e86cbb19a70888c206b9c91446216e0ef117b61022d0ae02"} Nov 23 15:02:18 crc kubenswrapper[4718]: I1123 15:02:18.714772 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ba72af14-c7bb-49fd-8838-97542eee727f","Type":"ContainerStarted","Data":"776f8182e899398ff9b43af662f1f1640d70a3ca20e76d67d0ceeac73fe09ce4"} Nov 23 15:02:18 crc kubenswrapper[4718]: I1123 15:02:18.714915 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ba72af14-c7bb-49fd-8838-97542eee727f" containerName="glance-httpd" containerID="cri-o://776f8182e899398ff9b43af662f1f1640d70a3ca20e76d67d0ceeac73fe09ce4" gracePeriod=30 Nov 23 15:02:18 crc kubenswrapper[4718]: I1123 15:02:18.715057 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ba72af14-c7bb-49fd-8838-97542eee727f" containerName="glance-log" containerID="cri-o://35c05670ecfdbf955de85ef67100250f0bacacf357a6cac91d1ce5c9c4b0c2e6" gracePeriod=30 Nov 23 15:02:18 crc kubenswrapper[4718]: I1123 15:02:18.716266 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j62px" event={"ID":"ef41e996-1748-4b67-ad0c-1b7481290391","Type":"ContainerStarted","Data":"ed81e056fd11a5ce9a49819947d9825994f78d6079a38b57c873adeac7bce6f7"} Nov 23 15:02:18 crc kubenswrapper[4718]: I1123 15:02:18.720921 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" event={"ID":"28f71e41-ae1d-43b8-a822-c6a56b5e8119","Type":"ContainerStarted","Data":"6020b9b585aece875223f4bab3dee5447f27cdb7903030d05549d4860d545f6c"} Nov 23 15:02:18 crc kubenswrapper[4718]: I1123 15:02:18.720977 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" event={"ID":"28f71e41-ae1d-43b8-a822-c6a56b5e8119","Type":"ContainerStarted","Data":"c5942fd04c9dfd680ae7bb779538cb1d7a1af5beb26142d749e4c4ce10af1db9"} Nov 23 15:02:18 crc kubenswrapper[4718]: I1123 15:02:18.738289 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=22.738272286 podStartE2EDuration="22.738272286s" podCreationTimestamp="2025-11-23 15:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:02:18.731630281 +0000 UTC m=+989.971250135" watchObservedRunningTime="2025-11-23 15:02:18.738272286 +0000 UTC m=+989.977892130" Nov 23 15:02:19 crc kubenswrapper[4718]: I1123 15:02:19.730048 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-j62px" event={"ID":"ef41e996-1748-4b67-ad0c-1b7481290391","Type":"ContainerStarted","Data":"9e6513797c7d6343ea7615bc111ffe98a5a5d8b3de0602f1552bcc1be5d7b63f"} Nov 23 15:02:19 crc kubenswrapper[4718]: I1123 15:02:19.732181 4718 generic.go:334] "Generic (PLEG): container finished" podID="28f71e41-ae1d-43b8-a822-c6a56b5e8119" containerID="6020b9b585aece875223f4bab3dee5447f27cdb7903030d05549d4860d545f6c" exitCode=0 Nov 23 15:02:19 crc kubenswrapper[4718]: I1123 15:02:19.732290 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" event={"ID":"28f71e41-ae1d-43b8-a822-c6a56b5e8119","Type":"ContainerDied","Data":"6020b9b585aece875223f4bab3dee5447f27cdb7903030d05549d4860d545f6c"} Nov 23 15:02:19 crc kubenswrapper[4718]: I1123 15:02:19.737004 4718 generic.go:334] "Generic (PLEG): container finished" podID="ba72af14-c7bb-49fd-8838-97542eee727f" containerID="776f8182e899398ff9b43af662f1f1640d70a3ca20e76d67d0ceeac73fe09ce4" exitCode=0 Nov 23 15:02:19 crc kubenswrapper[4718]: I1123 15:02:19.737033 4718 generic.go:334] "Generic (PLEG): container finished" podID="ba72af14-c7bb-49fd-8838-97542eee727f" containerID="35c05670ecfdbf955de85ef67100250f0bacacf357a6cac91d1ce5c9c4b0c2e6" exitCode=143 Nov 23 15:02:19 crc kubenswrapper[4718]: I1123 15:02:19.737058 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ba72af14-c7bb-49fd-8838-97542eee727f","Type":"ContainerDied","Data":"776f8182e899398ff9b43af662f1f1640d70a3ca20e76d67d0ceeac73fe09ce4"} Nov 23 15:02:19 crc kubenswrapper[4718]: I1123 15:02:19.737082 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ba72af14-c7bb-49fd-8838-97542eee727f","Type":"ContainerDied","Data":"35c05670ecfdbf955de85ef67100250f0bacacf357a6cac91d1ce5c9c4b0c2e6"} Nov 23 15:02:19 crc kubenswrapper[4718]: I1123 15:02:19.771993 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-j62px" podStartSLOduration=10.771949771 podStartE2EDuration="10.771949771s" podCreationTimestamp="2025-11-23 15:02:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:02:19.771049757 +0000 UTC m=+991.010669601" watchObservedRunningTime="2025-11-23 15:02:19.771949771 +0000 UTC m=+991.011569635" Nov 23 15:02:24 crc kubenswrapper[4718]: I1123 15:02:24.351724 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:02:24 crc kubenswrapper[4718]: I1123 15:02:24.352495 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:02:24 crc kubenswrapper[4718]: I1123 15:02:24.431044 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:02:24 crc kubenswrapper[4718]: I1123 15:02:24.431563 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-65c9478d8d-nxsfl" Nov 23 15:02:27 crc kubenswrapper[4718]: I1123 15:02:27.527299 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 15:02:27 crc kubenswrapper[4718]: I1123 15:02:27.529003 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 15:02:32 crc 
kubenswrapper[4718]: I1123 15:02:32.853266 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3","Type":"ContainerStarted","Data":"a307cc5cd1e7bdb4848a195e8b52d41167ea4a89a85328d4a13df44d46e6dd14"} Nov 23 15:02:34 crc kubenswrapper[4718]: I1123 15:02:34.352873 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5696759568-pxlzs" podUID="5dba5adf-299f-404c-91f9-c5848e9babe4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Nov 23 15:02:34 crc kubenswrapper[4718]: I1123 15:02:34.432072 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-65c9478d8d-nxsfl" podUID="ebb73efe-fe18-4507-b723-d3dbf1d8ed91" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Nov 23 15:02:35 crc kubenswrapper[4718]: I1123 15:02:35.879212 4718 generic.go:334] "Generic (PLEG): container finished" podID="ef41e996-1748-4b67-ad0c-1b7481290391" containerID="9e6513797c7d6343ea7615bc111ffe98a5a5d8b3de0602f1552bcc1be5d7b63f" exitCode=0 Nov 23 15:02:35 crc kubenswrapper[4718]: I1123 15:02:35.879743 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j62px" event={"ID":"ef41e996-1748-4b67-ad0c-1b7481290391","Type":"ContainerDied","Data":"9e6513797c7d6343ea7615bc111ffe98a5a5d8b3de0602f1552bcc1be5d7b63f"} Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.127355 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.298842 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ba72af14-c7bb-49fd-8838-97542eee727f\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.298925 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba72af14-c7bb-49fd-8838-97542eee727f-httpd-run\") pod \"ba72af14-c7bb-49fd-8838-97542eee727f\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.298980 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba72af14-c7bb-49fd-8838-97542eee727f-logs\") pod \"ba72af14-c7bb-49fd-8838-97542eee727f\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.299126 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba72af14-c7bb-49fd-8838-97542eee727f-config-data\") pod \"ba72af14-c7bb-49fd-8838-97542eee727f\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.299167 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4kfn\" (UniqueName: \"kubernetes.io/projected/ba72af14-c7bb-49fd-8838-97542eee727f-kube-api-access-d4kfn\") pod \"ba72af14-c7bb-49fd-8838-97542eee727f\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " Nov 23 15:02:38 
crc kubenswrapper[4718]: I1123 15:02:38.299191 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba72af14-c7bb-49fd-8838-97542eee727f-combined-ca-bundle\") pod \"ba72af14-c7bb-49fd-8838-97542eee727f\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.299228 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba72af14-c7bb-49fd-8838-97542eee727f-scripts\") pod \"ba72af14-c7bb-49fd-8838-97542eee727f\" (UID: \"ba72af14-c7bb-49fd-8838-97542eee727f\") " Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.299662 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba72af14-c7bb-49fd-8838-97542eee727f-logs" (OuterVolumeSpecName: "logs") pod "ba72af14-c7bb-49fd-8838-97542eee727f" (UID: "ba72af14-c7bb-49fd-8838-97542eee727f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.300610 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba72af14-c7bb-49fd-8838-97542eee727f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ba72af14-c7bb-49fd-8838-97542eee727f" (UID: "ba72af14-c7bb-49fd-8838-97542eee727f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.305935 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "ba72af14-c7bb-49fd-8838-97542eee727f" (UID: "ba72af14-c7bb-49fd-8838-97542eee727f"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.307725 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba72af14-c7bb-49fd-8838-97542eee727f-kube-api-access-d4kfn" (OuterVolumeSpecName: "kube-api-access-d4kfn") pod "ba72af14-c7bb-49fd-8838-97542eee727f" (UID: "ba72af14-c7bb-49fd-8838-97542eee727f"). InnerVolumeSpecName "kube-api-access-d4kfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.312868 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba72af14-c7bb-49fd-8838-97542eee727f-scripts" (OuterVolumeSpecName: "scripts") pod "ba72af14-c7bb-49fd-8838-97542eee727f" (UID: "ba72af14-c7bb-49fd-8838-97542eee727f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.340485 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba72af14-c7bb-49fd-8838-97542eee727f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba72af14-c7bb-49fd-8838-97542eee727f" (UID: "ba72af14-c7bb-49fd-8838-97542eee727f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.367757 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba72af14-c7bb-49fd-8838-97542eee727f-config-data" (OuterVolumeSpecName: "config-data") pod "ba72af14-c7bb-49fd-8838-97542eee727f" (UID: "ba72af14-c7bb-49fd-8838-97542eee727f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.403706 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.403736 4718 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba72af14-c7bb-49fd-8838-97542eee727f-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.403747 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba72af14-c7bb-49fd-8838-97542eee727f-logs\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.403755 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba72af14-c7bb-49fd-8838-97542eee727f-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.403767 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4kfn\" (UniqueName: \"kubernetes.io/projected/ba72af14-c7bb-49fd-8838-97542eee727f-kube-api-access-d4kfn\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.403779 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba72af14-c7bb-49fd-8838-97542eee727f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.403787 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba72af14-c7bb-49fd-8838-97542eee727f-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.432898 4718 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.442728 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-j62px" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.506425 4718 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.607647 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-fernet-keys\") pod \"ef41e996-1748-4b67-ad0c-1b7481290391\" (UID: \"ef41e996-1748-4b67-ad0c-1b7481290391\") " Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.607868 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-credential-keys\") pod \"ef41e996-1748-4b67-ad0c-1b7481290391\" (UID: \"ef41e996-1748-4b67-ad0c-1b7481290391\") " Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.607938 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2ldd\" (UniqueName: \"kubernetes.io/projected/ef41e996-1748-4b67-ad0c-1b7481290391-kube-api-access-g2ldd\") pod \"ef41e996-1748-4b67-ad0c-1b7481290391\" (UID: \"ef41e996-1748-4b67-ad0c-1b7481290391\") " Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.608053 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-scripts\") pod \"ef41e996-1748-4b67-ad0c-1b7481290391\" (UID: \"ef41e996-1748-4b67-ad0c-1b7481290391\") " Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.608262 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-config-data\") pod \"ef41e996-1748-4b67-ad0c-1b7481290391\" (UID: \"ef41e996-1748-4b67-ad0c-1b7481290391\") " Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.609163 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-combined-ca-bundle\") pod \"ef41e996-1748-4b67-ad0c-1b7481290391\" (UID: \"ef41e996-1748-4b67-ad0c-1b7481290391\") " Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.613296 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-scripts" (OuterVolumeSpecName: "scripts") pod "ef41e996-1748-4b67-ad0c-1b7481290391" (UID: "ef41e996-1748-4b67-ad0c-1b7481290391"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.614079 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ef41e996-1748-4b67-ad0c-1b7481290391" (UID: "ef41e996-1748-4b67-ad0c-1b7481290391"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.614613 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef41e996-1748-4b67-ad0c-1b7481290391-kube-api-access-g2ldd" (OuterVolumeSpecName: "kube-api-access-g2ldd") pod "ef41e996-1748-4b67-ad0c-1b7481290391" (UID: "ef41e996-1748-4b67-ad0c-1b7481290391"). InnerVolumeSpecName "kube-api-access-g2ldd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.628205 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ef41e996-1748-4b67-ad0c-1b7481290391" (UID: "ef41e996-1748-4b67-ad0c-1b7481290391"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.640196 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef41e996-1748-4b67-ad0c-1b7481290391" (UID: "ef41e996-1748-4b67-ad0c-1b7481290391"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.655543 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-config-data" (OuterVolumeSpecName: "config-data") pod "ef41e996-1748-4b67-ad0c-1b7481290391" (UID: "ef41e996-1748-4b67-ad0c-1b7481290391"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.711595 4718 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.711852 4718 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.711959 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2ldd\" (UniqueName: \"kubernetes.io/projected/ef41e996-1748-4b67-ad0c-1b7481290391-kube-api-access-g2ldd\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.712063 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.712146 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.712222 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef41e996-1748-4b67-ad0c-1b7481290391-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.923986 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.924396 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ba72af14-c7bb-49fd-8838-97542eee727f","Type":"ContainerDied","Data":"8a428f6c965c6361de8abf38cce58fbe2e0b0cbc9eb75a7eec44566ad09fae05"} Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.924484 4718 scope.go:117] "RemoveContainer" containerID="776f8182e899398ff9b43af662f1f1640d70a3ca20e76d67d0ceeac73fe09ce4" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.940046 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-j62px" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.945128 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j62px" event={"ID":"ef41e996-1748-4b67-ad0c-1b7481290391","Type":"ContainerDied","Data":"ed81e056fd11a5ce9a49819947d9825994f78d6079a38b57c873adeac7bce6f7"} Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.946639 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed81e056fd11a5ce9a49819947d9825994f78d6079a38b57c873adeac7bce6f7" Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.978451 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 15:02:38 crc kubenswrapper[4718]: I1123 15:02:38.985422 4718 scope.go:117] "RemoveContainer" containerID="35c05670ecfdbf955de85ef67100250f0bacacf357a6cac91d1ce5c9c4b0c2e6" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.008528 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" event={"ID":"28f71e41-ae1d-43b8-a822-c6a56b5e8119","Type":"ContainerStarted","Data":"f77f96e3ff0fa2bc124bbe3084a2d4bf537ba0b3cb13639ceb07450e0e14bf38"} Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.009073 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.061612 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.071483 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 15:02:39 crc kubenswrapper[4718]: E1123 15:02:39.071992 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ae4906-9b02-471f-ab42-aa4dc6ba017d" containerName="init" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.072010 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ae4906-9b02-471f-ab42-aa4dc6ba017d" containerName="init" Nov 23 15:02:39 crc kubenswrapper[4718]: E1123 15:02:39.072223 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef41e996-1748-4b67-ad0c-1b7481290391" containerName="keystone-bootstrap" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.072230 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef41e996-1748-4b67-ad0c-1b7481290391" containerName="keystone-bootstrap" Nov 23 15:02:39 crc kubenswrapper[4718]: E1123 15:02:39.072239 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba72af14-c7bb-49fd-8838-97542eee727f" containerName="glance-httpd" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.072245 4718 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ba72af14-c7bb-49fd-8838-97542eee727f" containerName="glance-httpd" Nov 23 15:02:39 crc kubenswrapper[4718]: E1123 15:02:39.072259 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ae4906-9b02-471f-ab42-aa4dc6ba017d" containerName="dnsmasq-dns" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.072264 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ae4906-9b02-471f-ab42-aa4dc6ba017d" containerName="dnsmasq-dns" Nov 23 15:02:39 crc kubenswrapper[4718]: E1123 15:02:39.072274 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba72af14-c7bb-49fd-8838-97542eee727f" containerName="glance-log" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.072280 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba72af14-c7bb-49fd-8838-97542eee727f" containerName="glance-log" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.072601 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="99ae4906-9b02-471f-ab42-aa4dc6ba017d" containerName="dnsmasq-dns" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.072627 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba72af14-c7bb-49fd-8838-97542eee727f" containerName="glance-httpd" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.072640 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba72af14-c7bb-49fd-8838-97542eee727f" containerName="glance-log" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.072651 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef41e996-1748-4b67-ad0c-1b7481290391" containerName="keystone-bootstrap" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.075004 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.077040 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.077698 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.091541 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" podStartSLOduration=43.09152091 podStartE2EDuration="43.09152091s" podCreationTimestamp="2025-11-23 15:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:02:39.05880827 +0000 UTC m=+1010.298428114" watchObservedRunningTime="2025-11-23 15:02:39.09152091 +0000 UTC m=+1010.331140754" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.092066 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.243312 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.243593 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.243621 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.243672 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.243715 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.243732 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-logs\") pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.243768 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.243788 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdst9\" (UniqueName: \"kubernetes.io/projected/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-kube-api-access-sdst9\") pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.347068 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.347114 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-logs\") pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.347155 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.347177 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdst9\" (UniqueName: \"kubernetes.io/projected/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-kube-api-access-sdst9\") pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.347220 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.347241 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.347262 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.347308 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.347608 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.348385 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.348520 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-logs\") pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.366729 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdst9\" (UniqueName: \"kubernetes.io/projected/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-kube-api-access-sdst9\") 
pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.367534 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.368646 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.369812 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.372078 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.394164 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.409409 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.564200 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-844cdbd5f8-ptmjk"] Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.565356 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.576653 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.576875 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.577027 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.577360 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xknxt" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.577427 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.577499 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.612704 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-844cdbd5f8-ptmjk"] Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.664825 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7d1c35f-1d40-4385-9579-dc7477cc104d-public-tls-certs\") pod \"keystone-844cdbd5f8-ptmjk\" (UID: \"e7d1c35f-1d40-4385-9579-dc7477cc104d\") " pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.665111 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpdm5\" (UniqueName: \"kubernetes.io/projected/e7d1c35f-1d40-4385-9579-dc7477cc104d-kube-api-access-mpdm5\") pod \"keystone-844cdbd5f8-ptmjk\" (UID: \"e7d1c35f-1d40-4385-9579-dc7477cc104d\") " pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.665129 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7d1c35f-1d40-4385-9579-dc7477cc104d-combined-ca-bundle\") pod \"keystone-844cdbd5f8-ptmjk\" (UID: \"e7d1c35f-1d40-4385-9579-dc7477cc104d\") " pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.665154 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7d1c35f-1d40-4385-9579-dc7477cc104d-config-data\") pod \"keystone-844cdbd5f8-ptmjk\" (UID: \"e7d1c35f-1d40-4385-9579-dc7477cc104d\") " pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.665171 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7d1c35f-1d40-4385-9579-dc7477cc104d-scripts\") pod \"keystone-844cdbd5f8-ptmjk\" (UID: \"e7d1c35f-1d40-4385-9579-dc7477cc104d\") " pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.665185 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e7d1c35f-1d40-4385-9579-dc7477cc104d-fernet-keys\") pod \"keystone-844cdbd5f8-ptmjk\" (UID: 
\"e7d1c35f-1d40-4385-9579-dc7477cc104d\") " pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.665238 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7d1c35f-1d40-4385-9579-dc7477cc104d-internal-tls-certs\") pod \"keystone-844cdbd5f8-ptmjk\" (UID: \"e7d1c35f-1d40-4385-9579-dc7477cc104d\") " pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.665286 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e7d1c35f-1d40-4385-9579-dc7477cc104d-credential-keys\") pod \"keystone-844cdbd5f8-ptmjk\" (UID: \"e7d1c35f-1d40-4385-9579-dc7477cc104d\") " pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.767381 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7d1c35f-1d40-4385-9579-dc7477cc104d-scripts\") pod \"keystone-844cdbd5f8-ptmjk\" (UID: \"e7d1c35f-1d40-4385-9579-dc7477cc104d\") " pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.767429 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7d1c35f-1d40-4385-9579-dc7477cc104d-config-data\") pod \"keystone-844cdbd5f8-ptmjk\" (UID: \"e7d1c35f-1d40-4385-9579-dc7477cc104d\") " pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.767482 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e7d1c35f-1d40-4385-9579-dc7477cc104d-fernet-keys\") pod \"keystone-844cdbd5f8-ptmjk\" (UID: \"e7d1c35f-1d40-4385-9579-dc7477cc104d\") " pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.767556 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7d1c35f-1d40-4385-9579-dc7477cc104d-internal-tls-certs\") pod \"keystone-844cdbd5f8-ptmjk\" (UID: \"e7d1c35f-1d40-4385-9579-dc7477cc104d\") " pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.767609 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e7d1c35f-1d40-4385-9579-dc7477cc104d-credential-keys\") pod \"keystone-844cdbd5f8-ptmjk\" (UID: \"e7d1c35f-1d40-4385-9579-dc7477cc104d\") " pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.767714 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7d1c35f-1d40-4385-9579-dc7477cc104d-public-tls-certs\") pod \"keystone-844cdbd5f8-ptmjk\" (UID: \"e7d1c35f-1d40-4385-9579-dc7477cc104d\") " pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.767767 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpdm5\" (UniqueName: \"kubernetes.io/projected/e7d1c35f-1d40-4385-9579-dc7477cc104d-kube-api-access-mpdm5\") pod \"keystone-844cdbd5f8-ptmjk\" (UID: \"e7d1c35f-1d40-4385-9579-dc7477cc104d\") " 
pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.767791 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7d1c35f-1d40-4385-9579-dc7477cc104d-combined-ca-bundle\") pod \"keystone-844cdbd5f8-ptmjk\" (UID: \"e7d1c35f-1d40-4385-9579-dc7477cc104d\") " pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.779247 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e7d1c35f-1d40-4385-9579-dc7477cc104d-credential-keys\") pod \"keystone-844cdbd5f8-ptmjk\" (UID: \"e7d1c35f-1d40-4385-9579-dc7477cc104d\") " pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.779747 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7d1c35f-1d40-4385-9579-dc7477cc104d-scripts\") pod \"keystone-844cdbd5f8-ptmjk\" (UID: \"e7d1c35f-1d40-4385-9579-dc7477cc104d\") " pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.782946 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7d1c35f-1d40-4385-9579-dc7477cc104d-public-tls-certs\") pod \"keystone-844cdbd5f8-ptmjk\" (UID: \"e7d1c35f-1d40-4385-9579-dc7477cc104d\") " pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.784043 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7d1c35f-1d40-4385-9579-dc7477cc104d-internal-tls-certs\") pod \"keystone-844cdbd5f8-ptmjk\" (UID: \"e7d1c35f-1d40-4385-9579-dc7477cc104d\") " pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.785078 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e7d1c35f-1d40-4385-9579-dc7477cc104d-fernet-keys\") pod \"keystone-844cdbd5f8-ptmjk\" (UID: \"e7d1c35f-1d40-4385-9579-dc7477cc104d\") " pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.790067 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7d1c35f-1d40-4385-9579-dc7477cc104d-config-data\") pod \"keystone-844cdbd5f8-ptmjk\" (UID: \"e7d1c35f-1d40-4385-9579-dc7477cc104d\") " pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.792950 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7d1c35f-1d40-4385-9579-dc7477cc104d-combined-ca-bundle\") pod \"keystone-844cdbd5f8-ptmjk\" (UID: \"e7d1c35f-1d40-4385-9579-dc7477cc104d\") " pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.798897 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpdm5\" (UniqueName: \"kubernetes.io/projected/e7d1c35f-1d40-4385-9579-dc7477cc104d-kube-api-access-mpdm5\") pod \"keystone-844cdbd5f8-ptmjk\" (UID: \"e7d1c35f-1d40-4385-9579-dc7477cc104d\") " pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:39 crc kubenswrapper[4718]: I1123 15:02:39.890640 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.028654 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2m7cd" event={"ID":"88793089-cfde-48d5-8670-880344ab6711","Type":"ContainerStarted","Data":"1bd9458474636a3f7139d8fa351f9f2a7afb035ed2f0149215e938e465c014c8"} Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.047399 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9b11789-7642-4d03-a060-26842da8ab4b","Type":"ContainerStarted","Data":"a31367ac24e687a1e97db4eab5799345d7807def120fea0c0e8b7bbb3c7226f2"} Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.062996 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-2m7cd" podStartSLOduration=3.221857299 podStartE2EDuration="55.062980835s" podCreationTimestamp="2025-11-23 15:01:45 +0000 UTC" firstStartedPulling="2025-11-23 15:01:47.054765935 +0000 UTC m=+958.294385779" lastFinishedPulling="2025-11-23 15:02:38.895889471 +0000 UTC m=+1010.135509315" observedRunningTime="2025-11-23 15:02:40.055727833 +0000 UTC m=+1011.295347667" watchObservedRunningTime="2025-11-23 15:02:40.062980835 +0000 UTC m=+1011.302600679" Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.064486 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3","Type":"ContainerStarted","Data":"26a5579ee1058c6125b13ac2b9caf720c40b4f5d3d86e8bec56218068b8069b9"} Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.064617 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="049b6ed4-998a-4ed4-89c1-5fa7832ebaa3" containerName="glance-log" containerID="cri-o://a307cc5cd1e7bdb4848a195e8b52d41167ea4a89a85328d4a13df44d46e6dd14" gracePeriod=30 Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.065092 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="049b6ed4-998a-4ed4-89c1-5fa7832ebaa3" containerName="glance-httpd" containerID="cri-o://26a5579ee1058c6125b13ac2b9caf720c40b4f5d3d86e8bec56218068b8069b9" gracePeriod=30 Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.088962 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2rx99" event={"ID":"2456d901-f349-4f7a-b9cc-63c9eba428c8","Type":"ContainerStarted","Data":"36fc12c4c4c04f3872dbd7dddd3dcf4c463fcaac134b0fceaab5eb6f76e13592"} Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.112025 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=44.111985428 podStartE2EDuration="44.111985428s" podCreationTimestamp="2025-11-23 15:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:02:40.086455087 +0000 UTC m=+1011.326074931" watchObservedRunningTime="2025-11-23 15:02:40.111985428 +0000 UTC m=+1011.351605272" Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.132736 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-2rx99" podStartSLOduration=3.167622431 podStartE2EDuration="55.132707643s" podCreationTimestamp="2025-11-23 15:01:45 +0000 UTC" firstStartedPulling="2025-11-23 
15:01:46.953380217 +0000 UTC m=+958.193000061" lastFinishedPulling="2025-11-23 15:02:38.918465429 +0000 UTC m=+1010.158085273" observedRunningTime="2025-11-23 15:02:40.107565004 +0000 UTC m=+1011.347184858" watchObservedRunningTime="2025-11-23 15:02:40.132707643 +0000 UTC m=+1011.372327487" Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.161463 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.403757 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-844cdbd5f8-ptmjk"] Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.471816 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba72af14-c7bb-49fd-8838-97542eee727f" path="/var/lib/kubelet/pods/ba72af14-c7bb-49fd-8838-97542eee727f/volumes" Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.677395 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.783570 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-config-data\") pod \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.783668 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frn68\" (UniqueName: \"kubernetes.io/projected/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-kube-api-access-frn68\") pod \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.783774 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-logs\") pod \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.783811 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.783882 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-httpd-run\") pod \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.783934 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-scripts\") pod \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.783975 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-combined-ca-bundle\") pod \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\" (UID: \"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3\") " Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.786092 4718 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-logs" (OuterVolumeSpecName: "logs") pod "049b6ed4-998a-4ed4-89c1-5fa7832ebaa3" (UID: "049b6ed4-998a-4ed4-89c1-5fa7832ebaa3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.786422 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "049b6ed4-998a-4ed4-89c1-5fa7832ebaa3" (UID: "049b6ed4-998a-4ed4-89c1-5fa7832ebaa3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.793542 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "049b6ed4-998a-4ed4-89c1-5fa7832ebaa3" (UID: "049b6ed4-998a-4ed4-89c1-5fa7832ebaa3"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.793847 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-scripts" (OuterVolumeSpecName: "scripts") pod "049b6ed4-998a-4ed4-89c1-5fa7832ebaa3" (UID: "049b6ed4-998a-4ed4-89c1-5fa7832ebaa3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.794287 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-kube-api-access-frn68" (OuterVolumeSpecName: "kube-api-access-frn68") pod "049b6ed4-998a-4ed4-89c1-5fa7832ebaa3" (UID: "049b6ed4-998a-4ed4-89c1-5fa7832ebaa3"). InnerVolumeSpecName "kube-api-access-frn68". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.817420 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "049b6ed4-998a-4ed4-89c1-5fa7832ebaa3" (UID: "049b6ed4-998a-4ed4-89c1-5fa7832ebaa3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.855516 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-config-data" (OuterVolumeSpecName: "config-data") pod "049b6ed4-998a-4ed4-89c1-5fa7832ebaa3" (UID: "049b6ed4-998a-4ed4-89c1-5fa7832ebaa3"). InnerVolumeSpecName "config-data". 
Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.886321 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.886355 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.886366 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.886378 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frn68\" (UniqueName: \"kubernetes.io/projected/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-kube-api-access-frn68\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.886389 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-logs\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.886420 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" "
Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.886430 4718 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.904114 4718 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc"
Nov 23 15:02:40 crc kubenswrapper[4718]: I1123 15:02:40.991935 4718 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.117036 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-cfbmg" event={"ID":"2de4e428-ba8b-43d2-893d-e6f020997e5b","Type":"ContainerStarted","Data":"8a49a7896dc8a92f8b9eebd99cf02527effd8db093472fdc6f4b6e03f796fc95"}
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.125659 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3ab8b114-a092-4eb7-a379-fe5c259f1fe9","Type":"ContainerStarted","Data":"c1aef4a1eaac963c42a4b7c228b213bbd1ac43b742d2cfab6a9c91930bb920ab"}
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.126241 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3ab8b114-a092-4eb7-a379-fe5c259f1fe9","Type":"ContainerStarted","Data":"e39ecd6bf971c74a6d459b21a597ddb045a821e15cf97759de649e4ea7e1b0cb"}
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.134545 4718 generic.go:334] "Generic (PLEG): container finished" podID="049b6ed4-998a-4ed4-89c1-5fa7832ebaa3" containerID="26a5579ee1058c6125b13ac2b9caf720c40b4f5d3d86e8bec56218068b8069b9" exitCode=0
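The exitCode=0 above is a clean exit; the exitCode=143 in the next entry is 128+15, the conventional code for a container that shut down on SIGTERM during a graceful stop, and the exitCode=137 entries further down (the horizon-5447c9669f-r2wq4 containers) are 128+9, i.e. SIGKILL. A one-line sanity check of that convention:

    package main

    import "fmt"

    // Containers killed by a signal conventionally exit with 128+signo.
    func describe(code int) string {
        switch {
        case code == 0:
            return "clean exit"
        case code > 128:
            return fmt.Sprintf("killed by signal %d", code-128) // 143=SIGTERM, 137=SIGKILL
        default:
            return "application error"
        }
    }

    func main() {
        for _, c := range []int{0, 143, 137} {
            fmt.Printf("exitCode=%d: %s\n", c, describe(c))
        }
    }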
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.134580 4718 generic.go:334] "Generic (PLEG): container finished" podID="049b6ed4-998a-4ed4-89c1-5fa7832ebaa3" containerID="a307cc5cd1e7bdb4848a195e8b52d41167ea4a89a85328d4a13df44d46e6dd14" exitCode=143
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.134619 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3","Type":"ContainerDied","Data":"26a5579ee1058c6125b13ac2b9caf720c40b4f5d3d86e8bec56218068b8069b9"}
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.134644 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3","Type":"ContainerDied","Data":"a307cc5cd1e7bdb4848a195e8b52d41167ea4a89a85328d4a13df44d46e6dd14"}
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.134653 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"049b6ed4-998a-4ed4-89c1-5fa7832ebaa3","Type":"ContainerDied","Data":"8d0b8a2e2aa15f19e86cbb19a70888c206b9c91446216e0ef117b61022d0ae02"}
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.134668 4718 scope.go:117] "RemoveContainer" containerID="26a5579ee1058c6125b13ac2b9caf720c40b4f5d3d86e8bec56218068b8069b9"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.134813 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.157356 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-844cdbd5f8-ptmjk" event={"ID":"e7d1c35f-1d40-4385-9579-dc7477cc104d","Type":"ContainerStarted","Data":"af649839657a217f1a300293e32440bf725c033bd01c47bc63e772f1107853a7"}
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.157411 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-844cdbd5f8-ptmjk" event={"ID":"e7d1c35f-1d40-4385-9579-dc7477cc104d","Type":"ContainerStarted","Data":"ec501f06dd603ce8c2e310a8928fb4c242b22c738e52c38373a4c5ff29a5702e"}
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.157497 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-844cdbd5f8-ptmjk"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.165630 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-cfbmg" podStartSLOduration=3.9541535039999998 podStartE2EDuration="56.165613757s" podCreationTimestamp="2025-11-23 15:01:45 +0000 UTC" firstStartedPulling="2025-11-23 15:01:46.707663584 +0000 UTC m=+957.947283428" lastFinishedPulling="2025-11-23 15:02:38.919123837 +0000 UTC m=+1010.158743681" observedRunningTime="2025-11-23 15:02:41.146878327 +0000 UTC m=+1012.386498181" watchObservedRunningTime="2025-11-23 15:02:41.165613757 +0000 UTC m=+1012.405233601"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.190866 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-844cdbd5f8-ptmjk" podStartSLOduration=2.190847409 podStartE2EDuration="2.190847409s" podCreationTimestamp="2025-11-23 15:02:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:02:41.17508132 +0000 UTC m=+1012.414701164" watchObservedRunningTime="2025-11-23 15:02:41.190847409 +0000 UTC m=+1012.430467253"
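The two pod_startup_latency_tracker entries above also show how the metrics relate: podStartSLOduration excludes image-pull time, so for cinder-db-sync-cfbmg the 56.17s E2E duration minus the ~52.21s pull window (lastFinishedPulling minus firstStartedPulling) leaves the reported 3.95s SLO duration, while keystone, which pulled nothing, reports SLO == E2E. The arithmetic, checked against the timestamps logged above:

    package main

    import (
        "fmt"
        "time"
    )

    // Timestamps are copied from the cinder-db-sync-cfbmg entry above; the
    // layout matches Go's default time.Time formatting used in the log.
    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        first, _ := time.Parse(layout, "2025-11-23 15:01:46.707663584 +0000 UTC")
        last, _ := time.Parse(layout, "2025-11-23 15:02:38.919123837 +0000 UTC")
        pull := last.Sub(first)              // ~52.211s spent pulling the image
        e2e := 56165613757 * time.Nanosecond // podStartE2EDuration=56.165613757s
        fmt.Println("pull:", pull, "slo:", e2e-pull) // slo ~3.954s, as logged
    }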
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.207939 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.212875 4718 scope.go:117] "RemoveContainer" containerID="a307cc5cd1e7bdb4848a195e8b52d41167ea4a89a85328d4a13df44d46e6dd14"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.217475 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.225403 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 15:02:41 crc kubenswrapper[4718]: E1123 15:02:41.226392 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="049b6ed4-998a-4ed4-89c1-5fa7832ebaa3" containerName="glance-httpd"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.226410 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="049b6ed4-998a-4ed4-89c1-5fa7832ebaa3" containerName="glance-httpd"
Nov 23 15:02:41 crc kubenswrapper[4718]: E1123 15:02:41.226434 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="049b6ed4-998a-4ed4-89c1-5fa7832ebaa3" containerName="glance-log"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.226457 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="049b6ed4-998a-4ed4-89c1-5fa7832ebaa3" containerName="glance-log"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.226680 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="049b6ed4-998a-4ed4-89c1-5fa7832ebaa3" containerName="glance-httpd"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.226696 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="049b6ed4-998a-4ed4-89c1-5fa7832ebaa3" containerName="glance-log"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.227798 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
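The DELETE/REMOVE/ADD triple above is glance-default-external-api-0 being deleted from the API and immediately re-created under the same name: the old pod was UID 049b6ed4-..., the replacement that follows is UID 8a622ba5-.... The E-level cpu_manager/memory_manager "RemoveStaleState" lines are routine cleanup of resource-manager state keyed by the departed UID, not failures. A toy scanner (my own names, purely illustrative) that flags such same-name/new-UID recreations in logs of this shape:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Captures pod name and UID from entries like:
    //   ... pod \"glance-default-external-api-0\" (UID: \"8a622ba5-...\") ...
    var uidRE = regexp.MustCompile(`pod \\?"([a-z0-9.-]+)\\?" \(UID: \\?"([0-9a-f-]{36})\\?"\)`)

    func main() {
        lastUID := map[string]string{} // pod name -> most recent UID seen
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
        for sc.Scan() {
            for _, m := range uidRE.FindAllStringSubmatch(sc.Text(), -1) {
                name, uid := m[1], m[2]
                if old, ok := lastUID[name]; ok && old != uid {
                    fmt.Printf("pod %s recreated: UID %s -> %s\n", name, old, uid)
                }
                lastUID[name] = uid
            }
        }
    }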
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.230569 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.240360 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.246770 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.296970 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a622ba5-be24-4cdd-a1eb-850f851ea41a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") " pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.297028 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a622ba5-be24-4cdd-a1eb-850f851ea41a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") " pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.297054 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a622ba5-be24-4cdd-a1eb-850f851ea41a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") " pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.297239 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a622ba5-be24-4cdd-a1eb-850f851ea41a-config-data\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") " pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.297284 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6vfq\" (UniqueName: \"kubernetes.io/projected/8a622ba5-be24-4cdd-a1eb-850f851ea41a-kube-api-access-t6vfq\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") " pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.297525 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a622ba5-be24-4cdd-a1eb-850f851ea41a-scripts\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") " pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.297561 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") " pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.297631 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a622ba5-be24-4cdd-a1eb-850f851ea41a-logs\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") " pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.349887 4718 scope.go:117] "RemoveContainer" containerID="26a5579ee1058c6125b13ac2b9caf720c40b4f5d3d86e8bec56218068b8069b9"
Nov 23 15:02:41 crc kubenswrapper[4718]: E1123 15:02:41.352231 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26a5579ee1058c6125b13ac2b9caf720c40b4f5d3d86e8bec56218068b8069b9\": container with ID starting with 26a5579ee1058c6125b13ac2b9caf720c40b4f5d3d86e8bec56218068b8069b9 not found: ID does not exist" containerID="26a5579ee1058c6125b13ac2b9caf720c40b4f5d3d86e8bec56218068b8069b9"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.352264 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26a5579ee1058c6125b13ac2b9caf720c40b4f5d3d86e8bec56218068b8069b9"} err="failed to get container status \"26a5579ee1058c6125b13ac2b9caf720c40b4f5d3d86e8bec56218068b8069b9\": rpc error: code = NotFound desc = could not find container \"26a5579ee1058c6125b13ac2b9caf720c40b4f5d3d86e8bec56218068b8069b9\": container with ID starting with 26a5579ee1058c6125b13ac2b9caf720c40b4f5d3d86e8bec56218068b8069b9 not found: ID does not exist"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.352288 4718 scope.go:117] "RemoveContainer" containerID="a307cc5cd1e7bdb4848a195e8b52d41167ea4a89a85328d4a13df44d46e6dd14"
Nov 23 15:02:41 crc kubenswrapper[4718]: E1123 15:02:41.352793 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a307cc5cd1e7bdb4848a195e8b52d41167ea4a89a85328d4a13df44d46e6dd14\": container with ID starting with a307cc5cd1e7bdb4848a195e8b52d41167ea4a89a85328d4a13df44d46e6dd14 not found: ID does not exist" containerID="a307cc5cd1e7bdb4848a195e8b52d41167ea4a89a85328d4a13df44d46e6dd14"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.352861 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a307cc5cd1e7bdb4848a195e8b52d41167ea4a89a85328d4a13df44d46e6dd14"} err="failed to get container status \"a307cc5cd1e7bdb4848a195e8b52d41167ea4a89a85328d4a13df44d46e6dd14\": rpc error: code = NotFound desc = could not find container \"a307cc5cd1e7bdb4848a195e8b52d41167ea4a89a85328d4a13df44d46e6dd14\": container with ID starting with a307cc5cd1e7bdb4848a195e8b52d41167ea4a89a85328d4a13df44d46e6dd14 not found: ID does not exist"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.352883 4718 scope.go:117] "RemoveContainer" containerID="26a5579ee1058c6125b13ac2b9caf720c40b4f5d3d86e8bec56218068b8069b9"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.353171 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26a5579ee1058c6125b13ac2b9caf720c40b4f5d3d86e8bec56218068b8069b9"} err="failed to get container status \"26a5579ee1058c6125b13ac2b9caf720c40b4f5d3d86e8bec56218068b8069b9\": rpc error: code = NotFound desc = could not find container \"26a5579ee1058c6125b13ac2b9caf720c40b4f5d3d86e8bec56218068b8069b9\": container with ID starting with 26a5579ee1058c6125b13ac2b9caf720c40b4f5d3d86e8bec56218068b8069b9 not found: ID does not exist"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.353196 4718 scope.go:117] "RemoveContainer" containerID="a307cc5cd1e7bdb4848a195e8b52d41167ea4a89a85328d4a13df44d46e6dd14"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.353563 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a307cc5cd1e7bdb4848a195e8b52d41167ea4a89a85328d4a13df44d46e6dd14"} err="failed to get container status \"a307cc5cd1e7bdb4848a195e8b52d41167ea4a89a85328d4a13df44d46e6dd14\": rpc error: code = NotFound desc = could not find container \"a307cc5cd1e7bdb4848a195e8b52d41167ea4a89a85328d4a13df44d46e6dd14\": container with ID starting with a307cc5cd1e7bdb4848a195e8b52d41167ea4a89a85328d4a13df44d46e6dd14 not found: ID does not exist"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.399634 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a622ba5-be24-4cdd-a1eb-850f851ea41a-scripts\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") " pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.401662 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") " pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.401749 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a622ba5-be24-4cdd-a1eb-850f851ea41a-logs\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") " pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.401893 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a622ba5-be24-4cdd-a1eb-850f851ea41a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") " pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.401998 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a622ba5-be24-4cdd-a1eb-850f851ea41a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") " pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.403277 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a622ba5-be24-4cdd-a1eb-850f851ea41a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") " pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.403382 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a622ba5-be24-4cdd-a1eb-850f851ea41a-config-data\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") " pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.403420 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6vfq\" (UniqueName: \"kubernetes.io/projected/8a622ba5-be24-4cdd-a1eb-850f851ea41a-kube-api-access-t6vfq\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") " pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.402559 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a622ba5-be24-4cdd-a1eb-850f851ea41a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") " pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.402065 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.406475 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a622ba5-be24-4cdd-a1eb-850f851ea41a-scripts\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") " pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.407663 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a622ba5-be24-4cdd-a1eb-850f851ea41a-logs\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") " pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.417239 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a622ba5-be24-4cdd-a1eb-850f851ea41a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") " pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.418555 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a622ba5-be24-4cdd-a1eb-850f851ea41a-config-data\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") " pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.421466 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6vfq\" (UniqueName: \"kubernetes.io/projected/8a622ba5-be24-4cdd-a1eb-850f851ea41a-kube-api-access-t6vfq\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") " pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.422251 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a622ba5-be24-4cdd-a1eb-850f851ea41a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") " pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.443868 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") " pod="openstack/glance-default-external-api-0"
Nov 23 15:02:41 crc kubenswrapper[4718]: I1123 15:02:41.569128 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 23 15:02:42 crc kubenswrapper[4718]: I1123 15:02:42.152389 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 15:02:42 crc kubenswrapper[4718]: I1123 15:02:42.174576 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3ab8b114-a092-4eb7-a379-fe5c259f1fe9","Type":"ContainerStarted","Data":"5bda54d511db35bf484b5e54da4bec5107a0f415da4439587098774398852ae3"}
Nov 23 15:02:42 crc kubenswrapper[4718]: I1123 15:02:42.198071 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.198024457 podStartE2EDuration="4.198024457s" podCreationTimestamp="2025-11-23 15:02:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:02:42.192920266 +0000 UTC m=+1013.432540120" watchObservedRunningTime="2025-11-23 15:02:42.198024457 +0000 UTC m=+1013.437644301"
Nov 23 15:02:42 crc kubenswrapper[4718]: I1123 15:02:42.451651 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="049b6ed4-998a-4ed4-89c1-5fa7832ebaa3" path="/var/lib/kubelet/pods/049b6ed4-998a-4ed4-89c1-5fa7832ebaa3/volumes"
Nov 23 15:02:43 crc kubenswrapper[4718]: I1123 15:02:43.195807 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a622ba5-be24-4cdd-a1eb-850f851ea41a","Type":"ContainerStarted","Data":"5fd7bde96f49172ceee57457fa3541e01976a1b89a385cecc7711665f83c8c20"}
Nov 23 15:02:43 crc kubenswrapper[4718]: I1123 15:02:43.196145 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a622ba5-be24-4cdd-a1eb-850f851ea41a","Type":"ContainerStarted","Data":"c333389cd5bb9061fcf5069dee044ddfda733f741b9edcd45b9026df6dc2cea5"}
Nov 23 15:02:44 crc kubenswrapper[4718]: I1123 15:02:44.231979 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a622ba5-be24-4cdd-a1eb-850f851ea41a","Type":"ContainerStarted","Data":"84c13bdace4917a8661ee3b99b9a1c2d2e3143c575041751bdf3393cc99e056a"}
Nov 23 15:02:44 crc kubenswrapper[4718]: I1123 15:02:44.268545 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.268523505 podStartE2EDuration="3.268523505s" podCreationTimestamp="2025-11-23 15:02:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:02:44.265187063 +0000 UTC m=+1015.504806907" watchObservedRunningTime="2025-11-23 15:02:44.268523505 +0000 UTC m=+1015.508143349"
Nov 23 15:02:45 crc kubenswrapper[4718]: I1123 15:02:45.245555 4718 generic.go:334] "Generic (PLEG): container finished" podID="88793089-cfde-48d5-8670-880344ab6711" containerID="1bd9458474636a3f7139d8fa351f9f2a7afb035ed2f0149215e938e465c014c8" exitCode=0
Nov 23 15:02:45 crc kubenswrapper[4718]: I1123 15:02:45.246408 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2m7cd" event={"ID":"88793089-cfde-48d5-8670-880344ab6711","Type":"ContainerDied","Data":"1bd9458474636a3f7139d8fa351f9f2a7afb035ed2f0149215e938e465c014c8"}
Nov 23 15:02:46 crc kubenswrapper[4718]: I1123 15:02:46.218985 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5696759568-pxlzs"
Nov 23 15:02:46 crc kubenswrapper[4718]: I1123 15:02:46.353414 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-65c9478d8d-nxsfl"
Nov 23 15:02:46 crc kubenswrapper[4718]: I1123 15:02:46.496496 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn"
Nov 23 15:02:46 crc kubenswrapper[4718]: I1123 15:02:46.576213 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xwqvg"]
Nov 23 15:02:46 crc kubenswrapper[4718]: I1123 15:02:46.576545 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-xwqvg" podUID="176e4753-0fcc-464b-b49e-a4b52cf26b5f" containerName="dnsmasq-dns" containerID="cri-o://a5feb3db2ea81734ba314bc36324d8893ea0499b549fd108c1ec3f0f13b4e9c8" gracePeriod=10
Nov 23 15:02:47 crc kubenswrapper[4718]: I1123 15:02:47.265608 4718 generic.go:334] "Generic (PLEG): container finished" podID="176e4753-0fcc-464b-b49e-a4b52cf26b5f" containerID="a5feb3db2ea81734ba314bc36324d8893ea0499b549fd108c1ec3f0f13b4e9c8" exitCode=0
Nov 23 15:02:47 crc kubenswrapper[4718]: I1123 15:02:47.265667 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-xwqvg" event={"ID":"176e4753-0fcc-464b-b49e-a4b52cf26b5f","Type":"ContainerDied","Data":"a5feb3db2ea81734ba314bc36324d8893ea0499b549fd108c1ec3f0f13b4e9c8"}
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.035154 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-65c9478d8d-nxsfl"
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.105628 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5696759568-pxlzs"]
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.106161 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5696759568-pxlzs" podUID="5dba5adf-299f-404c-91f9-c5848e9babe4" containerName="horizon" containerID="cri-o://b33354d724c85f5c24909dfdf0de020feb9b6de40d0a896f864c2518ec57bb14" gracePeriod=30
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.106646 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5696759568-pxlzs" podUID="5dba5adf-299f-404c-91f9-c5848e9babe4" containerName="horizon-log" containerID="cri-o://5060536a7c2e7d7ff42bbf0da0aaf1f0ffb07148047ba73cb4ff2fae01e7acc2" gracePeriod=30
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.112671 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5696759568-pxlzs" podUID="5dba5adf-299f-404c-91f9-c5848e9babe4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": EOF"
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.113022 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2m7cd"
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.123764 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5696759568-pxlzs" podUID="5dba5adf-299f-404c-91f9-c5848e9babe4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:38402->10.217.0.145:8443: read: connection reset by peer"
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.160362 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88793089-cfde-48d5-8670-880344ab6711-config-data\") pod \"88793089-cfde-48d5-8670-880344ab6711\" (UID: \"88793089-cfde-48d5-8670-880344ab6711\") "
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.160474 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfv92\" (UniqueName: \"kubernetes.io/projected/88793089-cfde-48d5-8670-880344ab6711-kube-api-access-wfv92\") pod \"88793089-cfde-48d5-8670-880344ab6711\" (UID: \"88793089-cfde-48d5-8670-880344ab6711\") "
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.160506 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88793089-cfde-48d5-8670-880344ab6711-combined-ca-bundle\") pod \"88793089-cfde-48d5-8670-880344ab6711\" (UID: \"88793089-cfde-48d5-8670-880344ab6711\") "
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.160538 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88793089-cfde-48d5-8670-880344ab6711-scripts\") pod \"88793089-cfde-48d5-8670-880344ab6711\" (UID: \"88793089-cfde-48d5-8670-880344ab6711\") "
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.160640 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88793089-cfde-48d5-8670-880344ab6711-logs\") pod \"88793089-cfde-48d5-8670-880344ab6711\" (UID: \"88793089-cfde-48d5-8670-880344ab6711\") "
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.170380 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88793089-cfde-48d5-8670-880344ab6711-scripts" (OuterVolumeSpecName: "scripts") pod "88793089-cfde-48d5-8670-880344ab6711" (UID: "88793089-cfde-48d5-8670-880344ab6711"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.174911 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88793089-cfde-48d5-8670-880344ab6711-kube-api-access-wfv92" (OuterVolumeSpecName: "kube-api-access-wfv92") pod "88793089-cfde-48d5-8670-880344ab6711" (UID: "88793089-cfde-48d5-8670-880344ab6711"). InnerVolumeSpecName "kube-api-access-wfv92". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.183573 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88793089-cfde-48d5-8670-880344ab6711-logs" (OuterVolumeSpecName: "logs") pod "88793089-cfde-48d5-8670-880344ab6711" (UID: "88793089-cfde-48d5-8670-880344ab6711"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
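The two readiness-probe failures above are an expected side effect of the shutdown just before them: once the horizon container receives SIGTERM inside its 30s grace period, its HTTPS listener dies mid-connection, so the kubelet's probe GET sees EOF and then a connection reset. Roughly what that probe request looks like, reduced to plain Go (the real prober adds headers, redirect policy and per-probe timeouts; the URL is the one in the log):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: time.Second,
            Transport: &http.Transport{
                // Pod serving certs are not in the host trust store; skip
                // verification for this illustration only.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/")
        if err != nil {
            fmt.Println("probe failed:", err) // e.g. EOF, connection reset by peer
            return
        }
        resp.Body.Close()
        fmt.Println("probe status:", resp.Status)
    }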
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.193188 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88793089-cfde-48d5-8670-880344ab6711-config-data" (OuterVolumeSpecName: "config-data") pod "88793089-cfde-48d5-8670-880344ab6711" (UID: "88793089-cfde-48d5-8670-880344ab6711"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.221357 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88793089-cfde-48d5-8670-880344ab6711-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88793089-cfde-48d5-8670-880344ab6711" (UID: "88793089-cfde-48d5-8670-880344ab6711"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.262470 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88793089-cfde-48d5-8670-880344ab6711-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.262508 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88793089-cfde-48d5-8670-880344ab6711-logs\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.262517 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88793089-cfde-48d5-8670-880344ab6711-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.262641 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfv92\" (UniqueName: \"kubernetes.io/projected/88793089-cfde-48d5-8670-880344ab6711-kube-api-access-wfv92\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.262654 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88793089-cfde-48d5-8670-880344ab6711-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.281855 4718 generic.go:334] "Generic (PLEG): container finished" podID="ffcd0872-12a2-4dc9-bfde-22681ed5212f" containerID="3e5fef0454fdbdb80fb1134eea5cb6a39c41bb7e0a2ed6a2be57c55dcf37dc38" exitCode=137
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.281894 4718 generic.go:334] "Generic (PLEG): container finished" podID="ffcd0872-12a2-4dc9-bfde-22681ed5212f" containerID="c3f67d4074e98f6e445fe9d4b9b94ec84d6cde963c168517150840f12ada5799" exitCode=137
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.281984 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5447c9669f-r2wq4" event={"ID":"ffcd0872-12a2-4dc9-bfde-22681ed5212f","Type":"ContainerDied","Data":"3e5fef0454fdbdb80fb1134eea5cb6a39c41bb7e0a2ed6a2be57c55dcf37dc38"}
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.282031 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5447c9669f-r2wq4" event={"ID":"ffcd0872-12a2-4dc9-bfde-22681ed5212f","Type":"ContainerDied","Data":"c3f67d4074e98f6e445fe9d4b9b94ec84d6cde963c168517150840f12ada5799"}
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.284572 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2m7cd" event={"ID":"88793089-cfde-48d5-8670-880344ab6711","Type":"ContainerDied","Data":"49a7d8fe5faa0e56e78f681f1ef5d4030db2e3fefc3e8a722828326af76c9a19"}
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.284617 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49a7d8fe5faa0e56e78f681f1ef5d4030db2e3fefc3e8a722828326af76c9a19"
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.284841 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2m7cd"
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.896006 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-xwqvg"
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.899878 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5447c9669f-r2wq4"
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.973207 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/176e4753-0fcc-464b-b49e-a4b52cf26b5f-config\") pod \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\" (UID: \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\") "
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.973298 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ffcd0872-12a2-4dc9-bfde-22681ed5212f-horizon-secret-key\") pod \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\" (UID: \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\") "
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.973331 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f96c\" (UniqueName: \"kubernetes.io/projected/ffcd0872-12a2-4dc9-bfde-22681ed5212f-kube-api-access-7f96c\") pod \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\" (UID: \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\") "
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.973368 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffcd0872-12a2-4dc9-bfde-22681ed5212f-logs\") pod \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\" (UID: \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\") "
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.973518 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/176e4753-0fcc-464b-b49e-a4b52cf26b5f-dns-svc\") pod \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\" (UID: \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\") "
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.973568 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frlsc\" (UniqueName: \"kubernetes.io/projected/176e4753-0fcc-464b-b49e-a4b52cf26b5f-kube-api-access-frlsc\") pod \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\" (UID: \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\") "
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.973605 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/176e4753-0fcc-464b-b49e-a4b52cf26b5f-ovsdbserver-nb\") pod \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\" (UID: \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\") "
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.973638 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/176e4753-0fcc-464b-b49e-a4b52cf26b5f-ovsdbserver-sb\") pod \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\" (UID: \"176e4753-0fcc-464b-b49e-a4b52cf26b5f\") "
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.973678 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffcd0872-12a2-4dc9-bfde-22681ed5212f-scripts\") pod \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\" (UID: \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\") "
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.973702 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffcd0872-12a2-4dc9-bfde-22681ed5212f-config-data\") pod \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\" (UID: \"ffcd0872-12a2-4dc9-bfde-22681ed5212f\") "
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.973903 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffcd0872-12a2-4dc9-bfde-22681ed5212f-logs" (OuterVolumeSpecName: "logs") pod "ffcd0872-12a2-4dc9-bfde-22681ed5212f" (UID: "ffcd0872-12a2-4dc9-bfde-22681ed5212f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.975008 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffcd0872-12a2-4dc9-bfde-22681ed5212f-logs\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.978335 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/176e4753-0fcc-464b-b49e-a4b52cf26b5f-kube-api-access-frlsc" (OuterVolumeSpecName: "kube-api-access-frlsc") pod "176e4753-0fcc-464b-b49e-a4b52cf26b5f" (UID: "176e4753-0fcc-464b-b49e-a4b52cf26b5f"). InnerVolumeSpecName "kube-api-access-frlsc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.978661 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffcd0872-12a2-4dc9-bfde-22681ed5212f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ffcd0872-12a2-4dc9-bfde-22681ed5212f" (UID: "ffcd0872-12a2-4dc9-bfde-22681ed5212f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.981068 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffcd0872-12a2-4dc9-bfde-22681ed5212f-kube-api-access-7f96c" (OuterVolumeSpecName: "kube-api-access-7f96c") pod "ffcd0872-12a2-4dc9-bfde-22681ed5212f" (UID: "ffcd0872-12a2-4dc9-bfde-22681ed5212f"). InnerVolumeSpecName "kube-api-access-7f96c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:02:48 crc kubenswrapper[4718]: I1123 15:02:48.997016 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffcd0872-12a2-4dc9-bfde-22681ed5212f-config-data" (OuterVolumeSpecName: "config-data") pod "ffcd0872-12a2-4dc9-bfde-22681ed5212f" (UID: "ffcd0872-12a2-4dc9-bfde-22681ed5212f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.021071 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffcd0872-12a2-4dc9-bfde-22681ed5212f-scripts" (OuterVolumeSpecName: "scripts") pod "ffcd0872-12a2-4dc9-bfde-22681ed5212f" (UID: "ffcd0872-12a2-4dc9-bfde-22681ed5212f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.027059 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176e4753-0fcc-464b-b49e-a4b52cf26b5f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "176e4753-0fcc-464b-b49e-a4b52cf26b5f" (UID: "176e4753-0fcc-464b-b49e-a4b52cf26b5f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.035385 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176e4753-0fcc-464b-b49e-a4b52cf26b5f-config" (OuterVolumeSpecName: "config") pod "176e4753-0fcc-464b-b49e-a4b52cf26b5f" (UID: "176e4753-0fcc-464b-b49e-a4b52cf26b5f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.036001 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176e4753-0fcc-464b-b49e-a4b52cf26b5f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "176e4753-0fcc-464b-b49e-a4b52cf26b5f" (UID: "176e4753-0fcc-464b-b49e-a4b52cf26b5f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.040786 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176e4753-0fcc-464b-b49e-a4b52cf26b5f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "176e4753-0fcc-464b-b49e-a4b52cf26b5f" (UID: "176e4753-0fcc-464b-b49e-a4b52cf26b5f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.076307 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frlsc\" (UniqueName: \"kubernetes.io/projected/176e4753-0fcc-464b-b49e-a4b52cf26b5f-kube-api-access-frlsc\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.076542 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/176e4753-0fcc-464b-b49e-a4b52cf26b5f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.076641 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/176e4753-0fcc-464b-b49e-a4b52cf26b5f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.076727 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffcd0872-12a2-4dc9-bfde-22681ed5212f-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.076804 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffcd0872-12a2-4dc9-bfde-22681ed5212f-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.076878 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/176e4753-0fcc-464b-b49e-a4b52cf26b5f-config\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.076950 4718 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ffcd0872-12a2-4dc9-bfde-22681ed5212f-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.077003 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f96c\" (UniqueName: \"kubernetes.io/projected/ffcd0872-12a2-4dc9-bfde-22681ed5212f-kube-api-access-7f96c\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.077061 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/176e4753-0fcc-464b-b49e-a4b52cf26b5f-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.235959 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7fdf4df4d-qlcjn"]
Nov 23 15:02:49 crc kubenswrapper[4718]: E1123 15:02:49.236685 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="176e4753-0fcc-464b-b49e-a4b52cf26b5f" containerName="dnsmasq-dns"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.236709 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="176e4753-0fcc-464b-b49e-a4b52cf26b5f" containerName="dnsmasq-dns"
Nov 23 15:02:49 crc kubenswrapper[4718]: E1123 15:02:49.236731 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="176e4753-0fcc-464b-b49e-a4b52cf26b5f" containerName="init"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.236739 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="176e4753-0fcc-464b-b49e-a4b52cf26b5f" containerName="init"
Nov 23 15:02:49 crc kubenswrapper[4718]: E1123 15:02:49.236766 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffcd0872-12a2-4dc9-bfde-22681ed5212f" containerName="horizon"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.236775 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffcd0872-12a2-4dc9-bfde-22681ed5212f" containerName="horizon"
Nov 23 15:02:49 crc kubenswrapper[4718]: E1123 15:02:49.236798 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88793089-cfde-48d5-8670-880344ab6711" containerName="placement-db-sync"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.236806 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="88793089-cfde-48d5-8670-880344ab6711" containerName="placement-db-sync"
Nov 23 15:02:49 crc kubenswrapper[4718]: E1123 15:02:49.236819 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffcd0872-12a2-4dc9-bfde-22681ed5212f" containerName="horizon-log"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.236827 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffcd0872-12a2-4dc9-bfde-22681ed5212f" containerName="horizon-log"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.237052 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="176e4753-0fcc-464b-b49e-a4b52cf26b5f" containerName="dnsmasq-dns"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.237095 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="88793089-cfde-48d5-8670-880344ab6711" containerName="placement-db-sync"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.237109 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffcd0872-12a2-4dc9-bfde-22681ed5212f" containerName="horizon"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.237119 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffcd0872-12a2-4dc9-bfde-22681ed5212f" containerName="horizon-log"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.238226 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7fdf4df4d-qlcjn"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.240160 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.240349 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.245725 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-ckbwr"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.247173 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.250956 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7fdf4df4d-qlcjn"]
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.257382 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.280924 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af2372a3-1e0c-4668-ab25-cfb8616a6d1b-logs\") pod \"placement-7fdf4df4d-qlcjn\" (UID: \"af2372a3-1e0c-4668-ab25-cfb8616a6d1b\") " pod="openstack/placement-7fdf4df4d-qlcjn"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.281033 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af2372a3-1e0c-4668-ab25-cfb8616a6d1b-combined-ca-bundle\") pod \"placement-7fdf4df4d-qlcjn\" (UID: \"af2372a3-1e0c-4668-ab25-cfb8616a6d1b\") " pod="openstack/placement-7fdf4df4d-qlcjn"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.281067 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af2372a3-1e0c-4668-ab25-cfb8616a6d1b-scripts\") pod \"placement-7fdf4df4d-qlcjn\" (UID: \"af2372a3-1e0c-4668-ab25-cfb8616a6d1b\") " pod="openstack/placement-7fdf4df4d-qlcjn"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.281094 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af2372a3-1e0c-4668-ab25-cfb8616a6d1b-public-tls-certs\") pod \"placement-7fdf4df4d-qlcjn\" (UID: \"af2372a3-1e0c-4668-ab25-cfb8616a6d1b\") " pod="openstack/placement-7fdf4df4d-qlcjn"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.281123 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af2372a3-1e0c-4668-ab25-cfb8616a6d1b-config-data\") pod \"placement-7fdf4df4d-qlcjn\" (UID: \"af2372a3-1e0c-4668-ab25-cfb8616a6d1b\") " pod="openstack/placement-7fdf4df4d-qlcjn"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.281738 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts9d6\" (UniqueName: \"kubernetes.io/projected/af2372a3-1e0c-4668-ab25-cfb8616a6d1b-kube-api-access-ts9d6\") pod \"placement-7fdf4df4d-qlcjn\" (UID: \"af2372a3-1e0c-4668-ab25-cfb8616a6d1b\") " pod="openstack/placement-7fdf4df4d-qlcjn"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.281830 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af2372a3-1e0c-4668-ab25-cfb8616a6d1b-internal-tls-certs\") pod \"placement-7fdf4df4d-qlcjn\" (UID: \"af2372a3-1e0c-4668-ab25-cfb8616a6d1b\") " pod="openstack/placement-7fdf4df4d-qlcjn"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.296146 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5447c9669f-r2wq4" event={"ID":"ffcd0872-12a2-4dc9-bfde-22681ed5212f","Type":"ContainerDied","Data":"dba514b4a6e404165512d7c2a6d221a4f607a3c8e18bc9d98a939bde12391cb4"}
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.296209 4718 scope.go:117] "RemoveContainer" containerID="3e5fef0454fdbdb80fb1134eea5cb6a39c41bb7e0a2ed6a2be57c55dcf37dc38"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.296162 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5447c9669f-r2wq4"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.311142 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-xwqvg" event={"ID":"176e4753-0fcc-464b-b49e-a4b52cf26b5f","Type":"ContainerDied","Data":"2439336c868c3ca60f1bb92810989c92bb68332b2bf3717cf2a0c5bd2e7d56fc"}
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.311189 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-xwqvg"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.314303 4718 generic.go:334] "Generic (PLEG): container finished" podID="2456d901-f349-4f7a-b9cc-63c9eba428c8" containerID="36fc12c4c4c04f3872dbd7dddd3dcf4c463fcaac134b0fceaab5eb6f76e13592" exitCode=0
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.314342 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2rx99" event={"ID":"2456d901-f349-4f7a-b9cc-63c9eba428c8","Type":"ContainerDied","Data":"36fc12c4c4c04f3872dbd7dddd3dcf4c463fcaac134b0fceaab5eb6f76e13592"}
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.334085 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5447c9669f-r2wq4"]
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.340724 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5447c9669f-r2wq4"]
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.365027 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xwqvg"]
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.374653 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xwqvg"]
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.384115 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af2372a3-1e0c-4668-ab25-cfb8616a6d1b-combined-ca-bundle\") pod \"placement-7fdf4df4d-qlcjn\" (UID: \"af2372a3-1e0c-4668-ab25-cfb8616a6d1b\") " pod="openstack/placement-7fdf4df4d-qlcjn"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.384169 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af2372a3-1e0c-4668-ab25-cfb8616a6d1b-scripts\") pod \"placement-7fdf4df4d-qlcjn\" (UID: \"af2372a3-1e0c-4668-ab25-cfb8616a6d1b\") " pod="openstack/placement-7fdf4df4d-qlcjn"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.384198 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af2372a3-1e0c-4668-ab25-cfb8616a6d1b-public-tls-certs\") pod \"placement-7fdf4df4d-qlcjn\" (UID: \"af2372a3-1e0c-4668-ab25-cfb8616a6d1b\") " pod="openstack/placement-7fdf4df4d-qlcjn"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.384234 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af2372a3-1e0c-4668-ab25-cfb8616a6d1b-config-data\") pod \"placement-7fdf4df4d-qlcjn\" (UID: \"af2372a3-1e0c-4668-ab25-cfb8616a6d1b\") " pod="openstack/placement-7fdf4df4d-qlcjn"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.384331 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts9d6\" (UniqueName: \"kubernetes.io/projected/af2372a3-1e0c-4668-ab25-cfb8616a6d1b-kube-api-access-ts9d6\") pod \"placement-7fdf4df4d-qlcjn\" (UID: \"af2372a3-1e0c-4668-ab25-cfb8616a6d1b\") " pod="openstack/placement-7fdf4df4d-qlcjn"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.384362 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af2372a3-1e0c-4668-ab25-cfb8616a6d1b-internal-tls-certs\") pod \"placement-7fdf4df4d-qlcjn\" (UID: \"af2372a3-1e0c-4668-ab25-cfb8616a6d1b\") " pod="openstack/placement-7fdf4df4d-qlcjn"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.384384 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af2372a3-1e0c-4668-ab25-cfb8616a6d1b-logs\") pod \"placement-7fdf4df4d-qlcjn\" (UID: \"af2372a3-1e0c-4668-ab25-cfb8616a6d1b\") " pod="openstack/placement-7fdf4df4d-qlcjn"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.385047 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af2372a3-1e0c-4668-ab25-cfb8616a6d1b-logs\") pod \"placement-7fdf4df4d-qlcjn\" (UID: \"af2372a3-1e0c-4668-ab25-cfb8616a6d1b\") " pod="openstack/placement-7fdf4df4d-qlcjn"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.389019 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af2372a3-1e0c-4668-ab25-cfb8616a6d1b-combined-ca-bundle\") pod \"placement-7fdf4df4d-qlcjn\" (UID: \"af2372a3-1e0c-4668-ab25-cfb8616a6d1b\") " pod="openstack/placement-7fdf4df4d-qlcjn"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.389291 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af2372a3-1e0c-4668-ab25-cfb8616a6d1b-internal-tls-certs\") pod \"placement-7fdf4df4d-qlcjn\" (UID: \"af2372a3-1e0c-4668-ab25-cfb8616a6d1b\") " pod="openstack/placement-7fdf4df4d-qlcjn"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.389782 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af2372a3-1e0c-4668-ab25-cfb8616a6d1b-config-data\") pod \"placement-7fdf4df4d-qlcjn\" (UID: \"af2372a3-1e0c-4668-ab25-cfb8616a6d1b\") " pod="openstack/placement-7fdf4df4d-qlcjn"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.390650 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af2372a3-1e0c-4668-ab25-cfb8616a6d1b-public-tls-certs\") pod \"placement-7fdf4df4d-qlcjn\" (UID: \"af2372a3-1e0c-4668-ab25-cfb8616a6d1b\") " pod="openstack/placement-7fdf4df4d-qlcjn"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.392090 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af2372a3-1e0c-4668-ab25-cfb8616a6d1b-scripts\") pod \"placement-7fdf4df4d-qlcjn\" (UID: \"af2372a3-1e0c-4668-ab25-cfb8616a6d1b\") " pod="openstack/placement-7fdf4df4d-qlcjn"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.406650 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts9d6\" (UniqueName: \"kubernetes.io/projected/af2372a3-1e0c-4668-ab25-cfb8616a6d1b-kube-api-access-ts9d6\") pod \"placement-7fdf4df4d-qlcjn\" (UID: \"af2372a3-1e0c-4668-ab25-cfb8616a6d1b\") " pod="openstack/placement-7fdf4df4d-qlcjn"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.411079 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.411129 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.454715 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.463109 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.562647 4718 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/placement-7fdf4df4d-qlcjn" Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.644217 4718 scope.go:117] "RemoveContainer" containerID="c3f67d4074e98f6e445fe9d4b9b94ec84d6cde963c168517150840f12ada5799" Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.716169 4718 scope.go:117] "RemoveContainer" containerID="a5feb3db2ea81734ba314bc36324d8893ea0499b549fd108c1ec3f0f13b4e9c8" Nov 23 15:02:49 crc kubenswrapper[4718]: I1123 15:02:49.741679 4718 scope.go:117] "RemoveContainer" containerID="a42a3873e671d689611cd65624198cebe225bf2642691942d7c498bac4b4753e" Nov 23 15:02:49 crc kubenswrapper[4718]: E1123 15:02:49.950131 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="b9b11789-7642-4d03-a060-26842da8ab4b" Nov 23 15:02:50 crc kubenswrapper[4718]: I1123 15:02:50.181473 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7fdf4df4d-qlcjn"] Nov 23 15:02:50 crc kubenswrapper[4718]: I1123 15:02:50.324624 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fdf4df4d-qlcjn" event={"ID":"af2372a3-1e0c-4668-ab25-cfb8616a6d1b","Type":"ContainerStarted","Data":"c06a6cb59e1ceccb17b3c0c8fc260737b77f0d8bd707a67cacb792f391e388ac"} Nov 23 15:02:50 crc kubenswrapper[4718]: I1123 15:02:50.328261 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b9b11789-7642-4d03-a060-26842da8ab4b" containerName="ceilometer-notification-agent" containerID="cri-o://0bc56d42abc1cb56ca15c69af2e2cd574714052c14728e576d622850687d287e" gracePeriod=30 Nov 23 15:02:50 crc kubenswrapper[4718]: I1123 15:02:50.328301 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b9b11789-7642-4d03-a060-26842da8ab4b" containerName="proxy-httpd" containerID="cri-o://ecce5db0df4fa039d4c9e0eca7c87559def134c2f6e506e21f2eb3b8b59cbcf4" gracePeriod=30 Nov 23 15:02:50 crc kubenswrapper[4718]: I1123 15:02:50.328354 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b9b11789-7642-4d03-a060-26842da8ab4b" containerName="sg-core" containerID="cri-o://a31367ac24e687a1e97db4eab5799345d7807def120fea0c0e8b7bbb3c7226f2" gracePeriod=30 Nov 23 15:02:50 crc kubenswrapper[4718]: I1123 15:02:50.328172 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9b11789-7642-4d03-a060-26842da8ab4b","Type":"ContainerStarted","Data":"ecce5db0df4fa039d4c9e0eca7c87559def134c2f6e506e21f2eb3b8b59cbcf4"} Nov 23 15:02:50 crc kubenswrapper[4718]: I1123 15:02:50.328950 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 15:02:50 crc kubenswrapper[4718]: I1123 15:02:50.330602 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 15:02:50 crc kubenswrapper[4718]: I1123 15:02:50.330630 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 15:02:50 crc kubenswrapper[4718]: I1123 15:02:50.464846 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="176e4753-0fcc-464b-b49e-a4b52cf26b5f" path="/var/lib/kubelet/pods/176e4753-0fcc-464b-b49e-a4b52cf26b5f/volumes" Nov 23 15:02:50 
crc kubenswrapper[4718]: I1123 15:02:50.465849 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffcd0872-12a2-4dc9-bfde-22681ed5212f" path="/var/lib/kubelet/pods/ffcd0872-12a2-4dc9-bfde-22681ed5212f/volumes" Nov 23 15:02:50 crc kubenswrapper[4718]: I1123 15:02:50.557878 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2rx99" Nov 23 15:02:50 crc kubenswrapper[4718]: I1123 15:02:50.613779 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2456d901-f349-4f7a-b9cc-63c9eba428c8-combined-ca-bundle\") pod \"2456d901-f349-4f7a-b9cc-63c9eba428c8\" (UID: \"2456d901-f349-4f7a-b9cc-63c9eba428c8\") " Nov 23 15:02:50 crc kubenswrapper[4718]: I1123 15:02:50.613817 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zzfw\" (UniqueName: \"kubernetes.io/projected/2456d901-f349-4f7a-b9cc-63c9eba428c8-kube-api-access-9zzfw\") pod \"2456d901-f349-4f7a-b9cc-63c9eba428c8\" (UID: \"2456d901-f349-4f7a-b9cc-63c9eba428c8\") " Nov 23 15:02:50 crc kubenswrapper[4718]: I1123 15:02:50.613936 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2456d901-f349-4f7a-b9cc-63c9eba428c8-db-sync-config-data\") pod \"2456d901-f349-4f7a-b9cc-63c9eba428c8\" (UID: \"2456d901-f349-4f7a-b9cc-63c9eba428c8\") " Nov 23 15:02:50 crc kubenswrapper[4718]: I1123 15:02:50.617765 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2456d901-f349-4f7a-b9cc-63c9eba428c8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2456d901-f349-4f7a-b9cc-63c9eba428c8" (UID: "2456d901-f349-4f7a-b9cc-63c9eba428c8"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:50 crc kubenswrapper[4718]: I1123 15:02:50.623728 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2456d901-f349-4f7a-b9cc-63c9eba428c8-kube-api-access-9zzfw" (OuterVolumeSpecName: "kube-api-access-9zzfw") pod "2456d901-f349-4f7a-b9cc-63c9eba428c8" (UID: "2456d901-f349-4f7a-b9cc-63c9eba428c8"). InnerVolumeSpecName "kube-api-access-9zzfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:02:50 crc kubenswrapper[4718]: I1123 15:02:50.638855 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2456d901-f349-4f7a-b9cc-63c9eba428c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2456d901-f349-4f7a-b9cc-63c9eba428c8" (UID: "2456d901-f349-4f7a-b9cc-63c9eba428c8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:50 crc kubenswrapper[4718]: I1123 15:02:50.716238 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2456d901-f349-4f7a-b9cc-63c9eba428c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:50 crc kubenswrapper[4718]: I1123 15:02:50.716276 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zzfw\" (UniqueName: \"kubernetes.io/projected/2456d901-f349-4f7a-b9cc-63c9eba428c8-kube-api-access-9zzfw\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:50 crc kubenswrapper[4718]: I1123 15:02:50.716288 4718 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2456d901-f349-4f7a-b9cc-63c9eba428c8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.341470 4718 generic.go:334] "Generic (PLEG): container finished" podID="5dba5adf-299f-404c-91f9-c5848e9babe4" containerID="b33354d724c85f5c24909dfdf0de020feb9b6de40d0a896f864c2518ec57bb14" exitCode=0 Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.341546 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5696759568-pxlzs" event={"ID":"5dba5adf-299f-404c-91f9-c5848e9babe4","Type":"ContainerDied","Data":"b33354d724c85f5c24909dfdf0de020feb9b6de40d0a896f864c2518ec57bb14"} Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.354096 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2rx99" event={"ID":"2456d901-f349-4f7a-b9cc-63c9eba428c8","Type":"ContainerDied","Data":"e2dbd12bc6adc1b71ea1cf24735d3ebdae298726837ec54d6e3c199dc37b24bd"} Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.354141 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2dbd12bc6adc1b71ea1cf24735d3ebdae298726837ec54d6e3c199dc37b24bd" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.354222 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2rx99" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.359165 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fdf4df4d-qlcjn" event={"ID":"af2372a3-1e0c-4668-ab25-cfb8616a6d1b","Type":"ContainerStarted","Data":"1ff33036d4b55c1285d31f650d41bdcd6a2014dd2b7a49bf8e9fbb11e934c19d"} Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.359307 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fdf4df4d-qlcjn" event={"ID":"af2372a3-1e0c-4668-ab25-cfb8616a6d1b","Type":"ContainerStarted","Data":"29ab5a1854cf79d9961f9f8cd57bb87956993f16dbe16a5909f40dd8b710eab1"} Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.359345 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7fdf4df4d-qlcjn" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.375222 4718 generic.go:334] "Generic (PLEG): container finished" podID="b9b11789-7642-4d03-a060-26842da8ab4b" containerID="ecce5db0df4fa039d4c9e0eca7c87559def134c2f6e506e21f2eb3b8b59cbcf4" exitCode=0 Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.375262 4718 generic.go:334] "Generic (PLEG): container finished" podID="b9b11789-7642-4d03-a060-26842da8ab4b" containerID="a31367ac24e687a1e97db4eab5799345d7807def120fea0c0e8b7bbb3c7226f2" exitCode=2 Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.375346 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9b11789-7642-4d03-a060-26842da8ab4b","Type":"ContainerDied","Data":"ecce5db0df4fa039d4c9e0eca7c87559def134c2f6e506e21f2eb3b8b59cbcf4"} Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.375539 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9b11789-7642-4d03-a060-26842da8ab4b","Type":"ContainerDied","Data":"a31367ac24e687a1e97db4eab5799345d7807def120fea0c0e8b7bbb3c7226f2"} Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.396188 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7fdf4df4d-qlcjn" podStartSLOduration=2.396170628 podStartE2EDuration="2.396170628s" podCreationTimestamp="2025-11-23 15:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:02:51.388414422 +0000 UTC m=+1022.628034296" watchObservedRunningTime="2025-11-23 15:02:51.396170628 +0000 UTC m=+1022.635790472" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.507993 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-95d8d74d7-xhwwk"] Nov 23 15:02:51 crc kubenswrapper[4718]: E1123 15:02:51.508425 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2456d901-f349-4f7a-b9cc-63c9eba428c8" containerName="barbican-db-sync" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.508470 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2456d901-f349-4f7a-b9cc-63c9eba428c8" containerName="barbican-db-sync" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.508706 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2456d901-f349-4f7a-b9cc-63c9eba428c8" containerName="barbican-db-sync" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.509597 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-95d8d74d7-xhwwk" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.512780 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.513658 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-4fxdr" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.513721 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.528753 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlgqv\" (UniqueName: \"kubernetes.io/projected/4fcdc6b9-b082-4a9d-b905-7978d941b38f-kube-api-access-mlgqv\") pod \"barbican-worker-95d8d74d7-xhwwk\" (UID: \"4fcdc6b9-b082-4a9d-b905-7978d941b38f\") " pod="openstack/barbican-worker-95d8d74d7-xhwwk" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.528867 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fcdc6b9-b082-4a9d-b905-7978d941b38f-combined-ca-bundle\") pod \"barbican-worker-95d8d74d7-xhwwk\" (UID: \"4fcdc6b9-b082-4a9d-b905-7978d941b38f\") " pod="openstack/barbican-worker-95d8d74d7-xhwwk" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.528910 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fcdc6b9-b082-4a9d-b905-7978d941b38f-config-data\") pod \"barbican-worker-95d8d74d7-xhwwk\" (UID: \"4fcdc6b9-b082-4a9d-b905-7978d941b38f\") " pod="openstack/barbican-worker-95d8d74d7-xhwwk" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.528989 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4fcdc6b9-b082-4a9d-b905-7978d941b38f-config-data-custom\") pod \"barbican-worker-95d8d74d7-xhwwk\" (UID: \"4fcdc6b9-b082-4a9d-b905-7978d941b38f\") " pod="openstack/barbican-worker-95d8d74d7-xhwwk" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.529043 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fcdc6b9-b082-4a9d-b905-7978d941b38f-logs\") pod \"barbican-worker-95d8d74d7-xhwwk\" (UID: \"4fcdc6b9-b082-4a9d-b905-7978d941b38f\") " pod="openstack/barbican-worker-95d8d74d7-xhwwk" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.543080 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-95d8d74d7-xhwwk"] Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.572492 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.572539 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.583372 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-74dd8678b6-fs5mq"] Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.594595 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-74dd8678b6-fs5mq" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.598219 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.617387 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-8lb6x"] Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.630281 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.637496 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fcdc6b9-b082-4a9d-b905-7978d941b38f-config-data\") pod \"barbican-worker-95d8d74d7-xhwwk\" (UID: \"4fcdc6b9-b082-4a9d-b905-7978d941b38f\") " pod="openstack/barbican-worker-95d8d74d7-xhwwk" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.637658 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4fcdc6b9-b082-4a9d-b905-7978d941b38f-config-data-custom\") pod \"barbican-worker-95d8d74d7-xhwwk\" (UID: \"4fcdc6b9-b082-4a9d-b905-7978d941b38f\") " pod="openstack/barbican-worker-95d8d74d7-xhwwk" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.637738 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fcdc6b9-b082-4a9d-b905-7978d941b38f-logs\") pod \"barbican-worker-95d8d74d7-xhwwk\" (UID: \"4fcdc6b9-b082-4a9d-b905-7978d941b38f\") " pod="openstack/barbican-worker-95d8d74d7-xhwwk" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.637939 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlgqv\" (UniqueName: \"kubernetes.io/projected/4fcdc6b9-b082-4a9d-b905-7978d941b38f-kube-api-access-mlgqv\") pod \"barbican-worker-95d8d74d7-xhwwk\" (UID: \"4fcdc6b9-b082-4a9d-b905-7978d941b38f\") " pod="openstack/barbican-worker-95d8d74d7-xhwwk" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.638079 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fcdc6b9-b082-4a9d-b905-7978d941b38f-combined-ca-bundle\") pod \"barbican-worker-95d8d74d7-xhwwk\" (UID: \"4fcdc6b9-b082-4a9d-b905-7978d941b38f\") " pod="openstack/barbican-worker-95d8d74d7-xhwwk" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.639653 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fcdc6b9-b082-4a9d-b905-7978d941b38f-logs\") pod \"barbican-worker-95d8d74d7-xhwwk\" (UID: \"4fcdc6b9-b082-4a9d-b905-7978d941b38f\") " pod="openstack/barbican-worker-95d8d74d7-xhwwk" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.650164 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.654939 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4fcdc6b9-b082-4a9d-b905-7978d941b38f-config-data-custom\") pod \"barbican-worker-95d8d74d7-xhwwk\" (UID: \"4fcdc6b9-b082-4a9d-b905-7978d941b38f\") " pod="openstack/barbican-worker-95d8d74d7-xhwwk" Nov 23 15:02:51 crc 
kubenswrapper[4718]: I1123 15:02:51.661390 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-74dd8678b6-fs5mq"] Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.670624 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fcdc6b9-b082-4a9d-b905-7978d941b38f-combined-ca-bundle\") pod \"barbican-worker-95d8d74d7-xhwwk\" (UID: \"4fcdc6b9-b082-4a9d-b905-7978d941b38f\") " pod="openstack/barbican-worker-95d8d74d7-xhwwk" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.676758 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlgqv\" (UniqueName: \"kubernetes.io/projected/4fcdc6b9-b082-4a9d-b905-7978d941b38f-kube-api-access-mlgqv\") pod \"barbican-worker-95d8d74d7-xhwwk\" (UID: \"4fcdc6b9-b082-4a9d-b905-7978d941b38f\") " pod="openstack/barbican-worker-95d8d74d7-xhwwk" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.695188 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fcdc6b9-b082-4a9d-b905-7978d941b38f-config-data\") pod \"barbican-worker-95d8d74d7-xhwwk\" (UID: \"4fcdc6b9-b082-4a9d-b905-7978d941b38f\") " pod="openstack/barbican-worker-95d8d74d7-xhwwk" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.696978 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.709582 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-8lb6x"] Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.739190 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc4lt\" (UniqueName: \"kubernetes.io/projected/4c8a309d-d182-4ad3-9818-a2c47fa25cec-kube-api-access-sc4lt\") pod \"dnsmasq-dns-586bdc5f9-8lb6x\" (UID: \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\") " pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.739244 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-config\") pod \"dnsmasq-dns-586bdc5f9-8lb6x\" (UID: \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\") " pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.739270 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-8lb6x\" (UID: \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\") " pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.739296 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/effa4eb1-dc32-4d96-8f19-0eaae852f9a1-config-data\") pod \"barbican-keystone-listener-74dd8678b6-fs5mq\" (UID: \"effa4eb1-dc32-4d96-8f19-0eaae852f9a1\") " pod="openstack/barbican-keystone-listener-74dd8678b6-fs5mq" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.739355 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/effa4eb1-dc32-4d96-8f19-0eaae852f9a1-config-data-custom\") pod \"barbican-keystone-listener-74dd8678b6-fs5mq\" (UID: \"effa4eb1-dc32-4d96-8f19-0eaae852f9a1\") " pod="openstack/barbican-keystone-listener-74dd8678b6-fs5mq" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.739381 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-8lb6x\" (UID: \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\") " pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.739409 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-8lb6x\" (UID: \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\") " pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.739472 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-8lb6x\" (UID: \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\") " pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.739488 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/effa4eb1-dc32-4d96-8f19-0eaae852f9a1-logs\") pod \"barbican-keystone-listener-74dd8678b6-fs5mq\" (UID: \"effa4eb1-dc32-4d96-8f19-0eaae852f9a1\") " pod="openstack/barbican-keystone-listener-74dd8678b6-fs5mq" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.739504 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdprn\" (UniqueName: \"kubernetes.io/projected/effa4eb1-dc32-4d96-8f19-0eaae852f9a1-kube-api-access-jdprn\") pod \"barbican-keystone-listener-74dd8678b6-fs5mq\" (UID: \"effa4eb1-dc32-4d96-8f19-0eaae852f9a1\") " pod="openstack/barbican-keystone-listener-74dd8678b6-fs5mq" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.739608 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/effa4eb1-dc32-4d96-8f19-0eaae852f9a1-combined-ca-bundle\") pod \"barbican-keystone-listener-74dd8678b6-fs5mq\" (UID: \"effa4eb1-dc32-4d96-8f19-0eaae852f9a1\") " pod="openstack/barbican-keystone-listener-74dd8678b6-fs5mq" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.796754 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-677cc89bb-l2pwx"] Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.798233 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-677cc89bb-l2pwx" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.805855 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.841716 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc4lt\" (UniqueName: \"kubernetes.io/projected/4c8a309d-d182-4ad3-9818-a2c47fa25cec-kube-api-access-sc4lt\") pod \"dnsmasq-dns-586bdc5f9-8lb6x\" (UID: \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\") " pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.841771 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-config\") pod \"dnsmasq-dns-586bdc5f9-8lb6x\" (UID: \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\") " pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.841790 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-8lb6x\" (UID: \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\") " pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.841808 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/effa4eb1-dc32-4d96-8f19-0eaae852f9a1-config-data\") pod \"barbican-keystone-listener-74dd8678b6-fs5mq\" (UID: \"effa4eb1-dc32-4d96-8f19-0eaae852f9a1\") " pod="openstack/barbican-keystone-listener-74dd8678b6-fs5mq" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.841865 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/effa4eb1-dc32-4d96-8f19-0eaae852f9a1-config-data-custom\") pod \"barbican-keystone-listener-74dd8678b6-fs5mq\" (UID: \"effa4eb1-dc32-4d96-8f19-0eaae852f9a1\") " pod="openstack/barbican-keystone-listener-74dd8678b6-fs5mq" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.841884 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-8lb6x\" (UID: \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\") " pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.841914 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-8lb6x\" (UID: \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\") " pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.841947 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/effa4eb1-dc32-4d96-8f19-0eaae852f9a1-logs\") pod \"barbican-keystone-listener-74dd8678b6-fs5mq\" (UID: \"effa4eb1-dc32-4d96-8f19-0eaae852f9a1\") " pod="openstack/barbican-keystone-listener-74dd8678b6-fs5mq" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.841963 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jdprn\" (UniqueName: \"kubernetes.io/projected/effa4eb1-dc32-4d96-8f19-0eaae852f9a1-kube-api-access-jdprn\") pod \"barbican-keystone-listener-74dd8678b6-fs5mq\" (UID: \"effa4eb1-dc32-4d96-8f19-0eaae852f9a1\") " pod="openstack/barbican-keystone-listener-74dd8678b6-fs5mq" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.841978 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-8lb6x\" (UID: \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\") " pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.841995 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/effa4eb1-dc32-4d96-8f19-0eaae852f9a1-combined-ca-bundle\") pod \"barbican-keystone-listener-74dd8678b6-fs5mq\" (UID: \"effa4eb1-dc32-4d96-8f19-0eaae852f9a1\") " pod="openstack/barbican-keystone-listener-74dd8678b6-fs5mq" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.842132 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-95d8d74d7-xhwwk" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.843223 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-8lb6x\" (UID: \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\") " pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.843488 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-8lb6x\" (UID: \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\") " pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.843685 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/effa4eb1-dc32-4d96-8f19-0eaae852f9a1-logs\") pod \"barbican-keystone-listener-74dd8678b6-fs5mq\" (UID: \"effa4eb1-dc32-4d96-8f19-0eaae852f9a1\") " pod="openstack/barbican-keystone-listener-74dd8678b6-fs5mq" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.844550 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-config\") pod \"dnsmasq-dns-586bdc5f9-8lb6x\" (UID: \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\") " pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.844673 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-8lb6x\" (UID: \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\") " pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.849294 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-677cc89bb-l2pwx"] Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.849849 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-8lb6x\" (UID: \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\") " pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.851024 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/effa4eb1-dc32-4d96-8f19-0eaae852f9a1-combined-ca-bundle\") pod \"barbican-keystone-listener-74dd8678b6-fs5mq\" (UID: \"effa4eb1-dc32-4d96-8f19-0eaae852f9a1\") " pod="openstack/barbican-keystone-listener-74dd8678b6-fs5mq" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.854088 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/effa4eb1-dc32-4d96-8f19-0eaae852f9a1-config-data-custom\") pod \"barbican-keystone-listener-74dd8678b6-fs5mq\" (UID: \"effa4eb1-dc32-4d96-8f19-0eaae852f9a1\") " pod="openstack/barbican-keystone-listener-74dd8678b6-fs5mq" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.866041 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdprn\" (UniqueName: \"kubernetes.io/projected/effa4eb1-dc32-4d96-8f19-0eaae852f9a1-kube-api-access-jdprn\") pod \"barbican-keystone-listener-74dd8678b6-fs5mq\" (UID: \"effa4eb1-dc32-4d96-8f19-0eaae852f9a1\") " pod="openstack/barbican-keystone-listener-74dd8678b6-fs5mq" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.866665 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc4lt\" (UniqueName: \"kubernetes.io/projected/4c8a309d-d182-4ad3-9818-a2c47fa25cec-kube-api-access-sc4lt\") pod \"dnsmasq-dns-586bdc5f9-8lb6x\" (UID: \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\") " pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.879272 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/effa4eb1-dc32-4d96-8f19-0eaae852f9a1-config-data\") pod \"barbican-keystone-listener-74dd8678b6-fs5mq\" (UID: \"effa4eb1-dc32-4d96-8f19-0eaae852f9a1\") " pod="openstack/barbican-keystone-listener-74dd8678b6-fs5mq" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.955736 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f563293-98ea-4cd4-9067-95a64333591d-logs\") pod \"barbican-api-677cc89bb-l2pwx\" (UID: \"0f563293-98ea-4cd4-9067-95a64333591d\") " pod="openstack/barbican-api-677cc89bb-l2pwx" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.956026 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f563293-98ea-4cd4-9067-95a64333591d-config-data\") pod \"barbican-api-677cc89bb-l2pwx\" (UID: \"0f563293-98ea-4cd4-9067-95a64333591d\") " pod="openstack/barbican-api-677cc89bb-l2pwx" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.956050 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8llk\" (UniqueName: \"kubernetes.io/projected/0f563293-98ea-4cd4-9067-95a64333591d-kube-api-access-r8llk\") pod \"barbican-api-677cc89bb-l2pwx\" (UID: \"0f563293-98ea-4cd4-9067-95a64333591d\") " pod="openstack/barbican-api-677cc89bb-l2pwx" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.956083 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f563293-98ea-4cd4-9067-95a64333591d-combined-ca-bundle\") pod \"barbican-api-677cc89bb-l2pwx\" (UID: \"0f563293-98ea-4cd4-9067-95a64333591d\") " pod="openstack/barbican-api-677cc89bb-l2pwx" Nov 23 15:02:51 crc kubenswrapper[4718]: I1123 15:02:51.956131 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f563293-98ea-4cd4-9067-95a64333591d-config-data-custom\") pod \"barbican-api-677cc89bb-l2pwx\" (UID: \"0f563293-98ea-4cd4-9067-95a64333591d\") " pod="openstack/barbican-api-677cc89bb-l2pwx" Nov 23 15:02:52 crc kubenswrapper[4718]: I1123 15:02:52.041834 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-74dd8678b6-fs5mq" Nov 23 15:02:52 crc kubenswrapper[4718]: I1123 15:02:52.051349 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" Nov 23 15:02:52 crc kubenswrapper[4718]: I1123 15:02:52.064748 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f563293-98ea-4cd4-9067-95a64333591d-config-data\") pod \"barbican-api-677cc89bb-l2pwx\" (UID: \"0f563293-98ea-4cd4-9067-95a64333591d\") " pod="openstack/barbican-api-677cc89bb-l2pwx" Nov 23 15:02:52 crc kubenswrapper[4718]: I1123 15:02:52.064807 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8llk\" (UniqueName: \"kubernetes.io/projected/0f563293-98ea-4cd4-9067-95a64333591d-kube-api-access-r8llk\") pod \"barbican-api-677cc89bb-l2pwx\" (UID: \"0f563293-98ea-4cd4-9067-95a64333591d\") " pod="openstack/barbican-api-677cc89bb-l2pwx" Nov 23 15:02:52 crc kubenswrapper[4718]: I1123 15:02:52.064854 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f563293-98ea-4cd4-9067-95a64333591d-combined-ca-bundle\") pod \"barbican-api-677cc89bb-l2pwx\" (UID: \"0f563293-98ea-4cd4-9067-95a64333591d\") " pod="openstack/barbican-api-677cc89bb-l2pwx" Nov 23 15:02:52 crc kubenswrapper[4718]: I1123 15:02:52.064918 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f563293-98ea-4cd4-9067-95a64333591d-config-data-custom\") pod \"barbican-api-677cc89bb-l2pwx\" (UID: \"0f563293-98ea-4cd4-9067-95a64333591d\") " pod="openstack/barbican-api-677cc89bb-l2pwx" Nov 23 15:02:52 crc kubenswrapper[4718]: I1123 15:02:52.064999 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f563293-98ea-4cd4-9067-95a64333591d-logs\") pod \"barbican-api-677cc89bb-l2pwx\" (UID: \"0f563293-98ea-4cd4-9067-95a64333591d\") " pod="openstack/barbican-api-677cc89bb-l2pwx" Nov 23 15:02:52 crc kubenswrapper[4718]: I1123 15:02:52.065504 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f563293-98ea-4cd4-9067-95a64333591d-logs\") pod \"barbican-api-677cc89bb-l2pwx\" (UID: \"0f563293-98ea-4cd4-9067-95a64333591d\") " pod="openstack/barbican-api-677cc89bb-l2pwx" Nov 23 15:02:52 crc kubenswrapper[4718]: I1123 15:02:52.070020 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f563293-98ea-4cd4-9067-95a64333591d-config-data-custom\") pod \"barbican-api-677cc89bb-l2pwx\" (UID: \"0f563293-98ea-4cd4-9067-95a64333591d\") " pod="openstack/barbican-api-677cc89bb-l2pwx" Nov 23 15:02:52 crc kubenswrapper[4718]: I1123 15:02:52.070273 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f563293-98ea-4cd4-9067-95a64333591d-config-data\") pod \"barbican-api-677cc89bb-l2pwx\" (UID: \"0f563293-98ea-4cd4-9067-95a64333591d\") " pod="openstack/barbican-api-677cc89bb-l2pwx" Nov 23 15:02:52 crc kubenswrapper[4718]: I1123 15:02:52.091526 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f563293-98ea-4cd4-9067-95a64333591d-combined-ca-bundle\") pod \"barbican-api-677cc89bb-l2pwx\" (UID: \"0f563293-98ea-4cd4-9067-95a64333591d\") " pod="openstack/barbican-api-677cc89bb-l2pwx" Nov 23 15:02:52 crc kubenswrapper[4718]: I1123 15:02:52.094325 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8llk\" (UniqueName: \"kubernetes.io/projected/0f563293-98ea-4cd4-9067-95a64333591d-kube-api-access-r8llk\") pod \"barbican-api-677cc89bb-l2pwx\" (UID: \"0f563293-98ea-4cd4-9067-95a64333591d\") " pod="openstack/barbican-api-677cc89bb-l2pwx" Nov 23 15:02:52 crc kubenswrapper[4718]: I1123 15:02:52.127726 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-677cc89bb-l2pwx" Nov 23 15:02:52 crc kubenswrapper[4718]: I1123 15:02:52.392641 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 23 15:02:52 crc kubenswrapper[4718]: I1123 15:02:52.392917 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 23 15:02:52 crc kubenswrapper[4718]: I1123 15:02:52.393073 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7fdf4df4d-qlcjn" Nov 23 15:02:52 crc kubenswrapper[4718]: I1123 15:02:52.419407 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-95d8d74d7-xhwwk"] Nov 23 15:02:52 crc kubenswrapper[4718]: I1123 15:02:52.575607 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-8lb6x"] Nov 23 15:02:52 crc kubenswrapper[4718]: I1123 15:02:52.649529 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-677cc89bb-l2pwx"] Nov 23 15:02:52 crc kubenswrapper[4718]: I1123 15:02:52.654539 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-74dd8678b6-fs5mq"] Nov 23 15:02:52 crc kubenswrapper[4718]: I1123 15:02:52.980739 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 23 15:02:52 crc kubenswrapper[4718]: I1123 15:02:52.981155 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.234402 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.302128 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9b11789-7642-4d03-a060-26842da8ab4b-scripts\") pod \"b9b11789-7642-4d03-a060-26842da8ab4b\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.302215 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b11789-7642-4d03-a060-26842da8ab4b-combined-ca-bundle\") pod \"b9b11789-7642-4d03-a060-26842da8ab4b\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.302267 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9b11789-7642-4d03-a060-26842da8ab4b-sg-core-conf-yaml\") pod \"b9b11789-7642-4d03-a060-26842da8ab4b\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.302314 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9b11789-7642-4d03-a060-26842da8ab4b-log-httpd\") pod \"b9b11789-7642-4d03-a060-26842da8ab4b\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.302368 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b11789-7642-4d03-a060-26842da8ab4b-config-data\") pod \"b9b11789-7642-4d03-a060-26842da8ab4b\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.302394 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl4b4\" (UniqueName: \"kubernetes.io/projected/b9b11789-7642-4d03-a060-26842da8ab4b-kube-api-access-tl4b4\") pod \"b9b11789-7642-4d03-a060-26842da8ab4b\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.302502 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9b11789-7642-4d03-a060-26842da8ab4b-run-httpd\") pod \"b9b11789-7642-4d03-a060-26842da8ab4b\" (UID: \"b9b11789-7642-4d03-a060-26842da8ab4b\") " Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.302853 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9b11789-7642-4d03-a060-26842da8ab4b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b9b11789-7642-4d03-a060-26842da8ab4b" (UID: "b9b11789-7642-4d03-a060-26842da8ab4b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.302890 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9b11789-7642-4d03-a060-26842da8ab4b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b9b11789-7642-4d03-a060-26842da8ab4b" (UID: "b9b11789-7642-4d03-a060-26842da8ab4b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.303077 4718 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9b11789-7642-4d03-a060-26842da8ab4b-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.303091 4718 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9b11789-7642-4d03-a060-26842da8ab4b-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.308261 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b11789-7642-4d03-a060-26842da8ab4b-scripts" (OuterVolumeSpecName: "scripts") pod "b9b11789-7642-4d03-a060-26842da8ab4b" (UID: "b9b11789-7642-4d03-a060-26842da8ab4b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.309687 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9b11789-7642-4d03-a060-26842da8ab4b-kube-api-access-tl4b4" (OuterVolumeSpecName: "kube-api-access-tl4b4") pod "b9b11789-7642-4d03-a060-26842da8ab4b" (UID: "b9b11789-7642-4d03-a060-26842da8ab4b"). InnerVolumeSpecName "kube-api-access-tl4b4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.313230 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.338623 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b11789-7642-4d03-a060-26842da8ab4b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b9b11789-7642-4d03-a060-26842da8ab4b" (UID: "b9b11789-7642-4d03-a060-26842da8ab4b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.362624 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b11789-7642-4d03-a060-26842da8ab4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9b11789-7642-4d03-a060-26842da8ab4b" (UID: "b9b11789-7642-4d03-a060-26842da8ab4b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.405415 4718 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9b11789-7642-4d03-a060-26842da8ab4b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.406554 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tl4b4\" (UniqueName: \"kubernetes.io/projected/b9b11789-7642-4d03-a060-26842da8ab4b-kube-api-access-tl4b4\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.406573 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9b11789-7642-4d03-a060-26842da8ab4b-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.406588 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b11789-7642-4d03-a060-26842da8ab4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.417048 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-95d8d74d7-xhwwk" event={"ID":"4fcdc6b9-b082-4a9d-b905-7978d941b38f","Type":"ContainerStarted","Data":"3053cb096d656b79af9b6e91f2cfa22f4baedcc774ad416fc1d61cdec4a1545e"} Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.423196 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b11789-7642-4d03-a060-26842da8ab4b-config-data" (OuterVolumeSpecName: "config-data") pod "b9b11789-7642-4d03-a060-26842da8ab4b" (UID: "b9b11789-7642-4d03-a060-26842da8ab4b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.437715 4718 generic.go:334] "Generic (PLEG): container finished" podID="b9b11789-7642-4d03-a060-26842da8ab4b" containerID="0bc56d42abc1cb56ca15c69af2e2cd574714052c14728e576d622850687d287e" exitCode=0 Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.437819 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9b11789-7642-4d03-a060-26842da8ab4b","Type":"ContainerDied","Data":"0bc56d42abc1cb56ca15c69af2e2cd574714052c14728e576d622850687d287e"} Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.437852 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9b11789-7642-4d03-a060-26842da8ab4b","Type":"ContainerDied","Data":"89b5a2823cc325082a8b7513c0810c7970d300df77dd8a8060b55c7652dac76b"} Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.437871 4718 scope.go:117] "RemoveContainer" containerID="ecce5db0df4fa039d4c9e0eca7c87559def134c2f6e506e21f2eb3b8b59cbcf4" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.438017 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.457236 4718 generic.go:334] "Generic (PLEG): container finished" podID="4c8a309d-d182-4ad3-9818-a2c47fa25cec" containerID="6050772260a85e73dbc998e01d680a6da0078372e31204af4ba0c48b528323e8" exitCode=0 Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.457340 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" event={"ID":"4c8a309d-d182-4ad3-9818-a2c47fa25cec","Type":"ContainerDied","Data":"6050772260a85e73dbc998e01d680a6da0078372e31204af4ba0c48b528323e8"} Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.457379 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" event={"ID":"4c8a309d-d182-4ad3-9818-a2c47fa25cec","Type":"ContainerStarted","Data":"7a3facdfdd0ea72aa7111662fac60ea2f2ddac429c79367211d27a101bb95c75"} Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.465707 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-74dd8678b6-fs5mq" event={"ID":"effa4eb1-dc32-4d96-8f19-0eaae852f9a1","Type":"ContainerStarted","Data":"2186715626e427c93bb8e423e851cb598c585de3be19c0acbeaf6a1bf6df75df"} Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.468569 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-677cc89bb-l2pwx" event={"ID":"0f563293-98ea-4cd4-9067-95a64333591d","Type":"ContainerStarted","Data":"801c8d94669a0fe6bb445e9853e0756bf5ab321fc956557e7147de76d46e8e6d"} Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.468667 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-677cc89bb-l2pwx" event={"ID":"0f563293-98ea-4cd4-9067-95a64333591d","Type":"ContainerStarted","Data":"19aefae9eafc6814302ce2784b3e12ccd8963356bd809309bf30280688a4e6a5"} Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.468686 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-677cc89bb-l2pwx" event={"ID":"0f563293-98ea-4cd4-9067-95a64333591d","Type":"ContainerStarted","Data":"d1dee27b40446b32d8439280124f70215db5951dd90790640802867a91514afa"} Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.469589 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-677cc89bb-l2pwx" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.469636 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-677cc89bb-l2pwx" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.507888 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b11789-7642-4d03-a060-26842da8ab4b-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.529028 4718 scope.go:117] "RemoveContainer" containerID="a31367ac24e687a1e97db4eab5799345d7807def120fea0c0e8b7bbb3c7226f2" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.560149 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-677cc89bb-l2pwx" podStartSLOduration=2.560125203 podStartE2EDuration="2.560125203s" podCreationTimestamp="2025-11-23 15:02:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:02:53.526010695 +0000 UTC m=+1024.765630539" watchObservedRunningTime="2025-11-23 15:02:53.560125203 +0000 UTC 
m=+1024.799745047" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.573688 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.608835 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.628516 4718 scope.go:117] "RemoveContainer" containerID="0bc56d42abc1cb56ca15c69af2e2cd574714052c14728e576d622850687d287e" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.665262 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 15:02:53 crc kubenswrapper[4718]: E1123 15:02:53.665714 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b11789-7642-4d03-a060-26842da8ab4b" containerName="ceilometer-notification-agent" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.665733 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b11789-7642-4d03-a060-26842da8ab4b" containerName="ceilometer-notification-agent" Nov 23 15:02:53 crc kubenswrapper[4718]: E1123 15:02:53.665745 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b11789-7642-4d03-a060-26842da8ab4b" containerName="sg-core" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.665750 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b11789-7642-4d03-a060-26842da8ab4b" containerName="sg-core" Nov 23 15:02:53 crc kubenswrapper[4718]: E1123 15:02:53.665768 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b11789-7642-4d03-a060-26842da8ab4b" containerName="proxy-httpd" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.665774 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b11789-7642-4d03-a060-26842da8ab4b" containerName="proxy-httpd" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.665957 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9b11789-7642-4d03-a060-26842da8ab4b" containerName="ceilometer-notification-agent" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.665968 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9b11789-7642-4d03-a060-26842da8ab4b" containerName="sg-core" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.665987 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9b11789-7642-4d03-a060-26842da8ab4b" containerName="proxy-httpd" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.667866 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.674413 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.674606 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.689094 4718 scope.go:117] "RemoveContainer" containerID="ecce5db0df4fa039d4c9e0eca7c87559def134c2f6e506e21f2eb3b8b59cbcf4" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.690073 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 15:02:53 crc kubenswrapper[4718]: E1123 15:02:53.691641 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecce5db0df4fa039d4c9e0eca7c87559def134c2f6e506e21f2eb3b8b59cbcf4\": container with ID starting with ecce5db0df4fa039d4c9e0eca7c87559def134c2f6e506e21f2eb3b8b59cbcf4 not found: ID does not exist" containerID="ecce5db0df4fa039d4c9e0eca7c87559def134c2f6e506e21f2eb3b8b59cbcf4" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.691680 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecce5db0df4fa039d4c9e0eca7c87559def134c2f6e506e21f2eb3b8b59cbcf4"} err="failed to get container status \"ecce5db0df4fa039d4c9e0eca7c87559def134c2f6e506e21f2eb3b8b59cbcf4\": rpc error: code = NotFound desc = could not find container \"ecce5db0df4fa039d4c9e0eca7c87559def134c2f6e506e21f2eb3b8b59cbcf4\": container with ID starting with ecce5db0df4fa039d4c9e0eca7c87559def134c2f6e506e21f2eb3b8b59cbcf4 not found: ID does not exist" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.691704 4718 scope.go:117] "RemoveContainer" containerID="a31367ac24e687a1e97db4eab5799345d7807def120fea0c0e8b7bbb3c7226f2" Nov 23 15:02:53 crc kubenswrapper[4718]: E1123 15:02:53.692056 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a31367ac24e687a1e97db4eab5799345d7807def120fea0c0e8b7bbb3c7226f2\": container with ID starting with a31367ac24e687a1e97db4eab5799345d7807def120fea0c0e8b7bbb3c7226f2 not found: ID does not exist" containerID="a31367ac24e687a1e97db4eab5799345d7807def120fea0c0e8b7bbb3c7226f2" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.692077 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a31367ac24e687a1e97db4eab5799345d7807def120fea0c0e8b7bbb3c7226f2"} err="failed to get container status \"a31367ac24e687a1e97db4eab5799345d7807def120fea0c0e8b7bbb3c7226f2\": rpc error: code = NotFound desc = could not find container \"a31367ac24e687a1e97db4eab5799345d7807def120fea0c0e8b7bbb3c7226f2\": container with ID starting with a31367ac24e687a1e97db4eab5799345d7807def120fea0c0e8b7bbb3c7226f2 not found: ID does not exist" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.692099 4718 scope.go:117] "RemoveContainer" containerID="0bc56d42abc1cb56ca15c69af2e2cd574714052c14728e576d622850687d287e" Nov 23 15:02:53 crc kubenswrapper[4718]: E1123 15:02:53.692328 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bc56d42abc1cb56ca15c69af2e2cd574714052c14728e576d622850687d287e\": container with ID starting with 0bc56d42abc1cb56ca15c69af2e2cd574714052c14728e576d622850687d287e not found: ID 
does not exist" containerID="0bc56d42abc1cb56ca15c69af2e2cd574714052c14728e576d622850687d287e" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.692349 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bc56d42abc1cb56ca15c69af2e2cd574714052c14728e576d622850687d287e"} err="failed to get container status \"0bc56d42abc1cb56ca15c69af2e2cd574714052c14728e576d622850687d287e\": rpc error: code = NotFound desc = could not find container \"0bc56d42abc1cb56ca15c69af2e2cd574714052c14728e576d622850687d287e\": container with ID starting with 0bc56d42abc1cb56ca15c69af2e2cd574714052c14728e576d622850687d287e not found: ID does not exist" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.715568 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-scripts\") pod \"ceilometer-0\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") " pod="openstack/ceilometer-0" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.715684 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") " pod="openstack/ceilometer-0" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.715737 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-run-httpd\") pod \"ceilometer-0\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") " pod="openstack/ceilometer-0" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.715763 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") " pod="openstack/ceilometer-0" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.715802 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-log-httpd\") pod \"ceilometer-0\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") " pod="openstack/ceilometer-0" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.715831 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-config-data\") pod \"ceilometer-0\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") " pod="openstack/ceilometer-0" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.715853 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvhp9\" (UniqueName: \"kubernetes.io/projected/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-kube-api-access-xvhp9\") pod \"ceilometer-0\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") " pod="openstack/ceilometer-0" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.817209 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") " pod="openstack/ceilometer-0" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.817273 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-run-httpd\") pod \"ceilometer-0\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") " pod="openstack/ceilometer-0" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.817297 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") " pod="openstack/ceilometer-0" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.817330 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-log-httpd\") pod \"ceilometer-0\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") " pod="openstack/ceilometer-0" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.817359 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-config-data\") pod \"ceilometer-0\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") " pod="openstack/ceilometer-0" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.817381 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvhp9\" (UniqueName: \"kubernetes.io/projected/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-kube-api-access-xvhp9\") pod \"ceilometer-0\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") " pod="openstack/ceilometer-0" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.818912 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-run-httpd\") pod \"ceilometer-0\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") " pod="openstack/ceilometer-0" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.819291 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-scripts\") pod \"ceilometer-0\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") " pod="openstack/ceilometer-0" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.822835 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") " pod="openstack/ceilometer-0" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.823319 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-log-httpd\") pod \"ceilometer-0\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") " pod="openstack/ceilometer-0" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.824205 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-scripts\") pod \"ceilometer-0\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") " pod="openstack/ceilometer-0" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.824617 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") " pod="openstack/ceilometer-0" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.833640 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-config-data\") pod \"ceilometer-0\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") " pod="openstack/ceilometer-0" Nov 23 15:02:53 crc kubenswrapper[4718]: I1123 15:02:53.839078 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvhp9\" (UniqueName: \"kubernetes.io/projected/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-kube-api-access-xvhp9\") pod \"ceilometer-0\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") " pod="openstack/ceilometer-0" Nov 23 15:02:54 crc kubenswrapper[4718]: I1123 15:02:54.053277 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 15:02:54 crc kubenswrapper[4718]: I1123 15:02:54.352232 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5696759568-pxlzs" podUID="5dba5adf-299f-404c-91f9-c5848e9babe4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Nov 23 15:02:54 crc kubenswrapper[4718]: I1123 15:02:54.459658 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9b11789-7642-4d03-a060-26842da8ab4b" path="/var/lib/kubelet/pods/b9b11789-7642-4d03-a060-26842da8ab4b/volumes" Nov 23 15:02:54 crc kubenswrapper[4718]: I1123 15:02:54.481547 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" event={"ID":"4c8a309d-d182-4ad3-9818-a2c47fa25cec","Type":"ContainerStarted","Data":"f4c1f7b6f58a09fdd334da6405a31e21a30c1a87a152b5e83bb3f09855109361"} Nov 23 15:02:54 crc kubenswrapper[4718]: I1123 15:02:54.481991 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" Nov 23 15:02:54 crc kubenswrapper[4718]: I1123 15:02:54.483618 4718 generic.go:334] "Generic (PLEG): container finished" podID="2de4e428-ba8b-43d2-893d-e6f020997e5b" containerID="8a49a7896dc8a92f8b9eebd99cf02527effd8db093472fdc6f4b6e03f796fc95" exitCode=0 Nov 23 15:02:54 crc kubenswrapper[4718]: I1123 15:02:54.483682 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-cfbmg" event={"ID":"2de4e428-ba8b-43d2-893d-e6f020997e5b","Type":"ContainerDied","Data":"8a49a7896dc8a92f8b9eebd99cf02527effd8db093472fdc6f4b6e03f796fc95"} Nov 23 15:02:54 crc kubenswrapper[4718]: I1123 15:02:54.483846 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 15:02:54 crc kubenswrapper[4718]: I1123 15:02:54.483868 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 15:02:54 crc kubenswrapper[4718]: I1123 15:02:54.522911 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" 
podStartSLOduration=3.522889678 podStartE2EDuration="3.522889678s" podCreationTimestamp="2025-11-23 15:02:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:02:54.502684566 +0000 UTC m=+1025.742304410" watchObservedRunningTime="2025-11-23 15:02:54.522889678 +0000 UTC m=+1025.762509522" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.113506 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.203138 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5c6d99969d-lw2d7"] Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.204966 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.207035 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.207303 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.248343 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdc09b60-8801-4296-98e6-94a2e5ac8697-logs\") pod \"barbican-api-5c6d99969d-lw2d7\" (UID: \"cdc09b60-8801-4296-98e6-94a2e5ac8697\") " pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.248460 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwm8b\" (UniqueName: \"kubernetes.io/projected/cdc09b60-8801-4296-98e6-94a2e5ac8697-kube-api-access-qwm8b\") pod \"barbican-api-5c6d99969d-lw2d7\" (UID: \"cdc09b60-8801-4296-98e6-94a2e5ac8697\") " pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.248487 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdc09b60-8801-4296-98e6-94a2e5ac8697-config-data-custom\") pod \"barbican-api-5c6d99969d-lw2d7\" (UID: \"cdc09b60-8801-4296-98e6-94a2e5ac8697\") " pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.248511 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdc09b60-8801-4296-98e6-94a2e5ac8697-combined-ca-bundle\") pod \"barbican-api-5c6d99969d-lw2d7\" (UID: \"cdc09b60-8801-4296-98e6-94a2e5ac8697\") " pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.248528 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdc09b60-8801-4296-98e6-94a2e5ac8697-internal-tls-certs\") pod \"barbican-api-5c6d99969d-lw2d7\" (UID: \"cdc09b60-8801-4296-98e6-94a2e5ac8697\") " pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.248592 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdc09b60-8801-4296-98e6-94a2e5ac8697-config-data\") pod 
\"barbican-api-5c6d99969d-lw2d7\" (UID: \"cdc09b60-8801-4296-98e6-94a2e5ac8697\") " pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.248608 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdc09b60-8801-4296-98e6-94a2e5ac8697-public-tls-certs\") pod \"barbican-api-5c6d99969d-lw2d7\" (UID: \"cdc09b60-8801-4296-98e6-94a2e5ac8697\") " pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.250902 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5c6d99969d-lw2d7"] Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.313980 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.350111 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdc09b60-8801-4296-98e6-94a2e5ac8697-config-data\") pod \"barbican-api-5c6d99969d-lw2d7\" (UID: \"cdc09b60-8801-4296-98e6-94a2e5ac8697\") " pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.350358 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdc09b60-8801-4296-98e6-94a2e5ac8697-public-tls-certs\") pod \"barbican-api-5c6d99969d-lw2d7\" (UID: \"cdc09b60-8801-4296-98e6-94a2e5ac8697\") " pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.350452 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdc09b60-8801-4296-98e6-94a2e5ac8697-logs\") pod \"barbican-api-5c6d99969d-lw2d7\" (UID: \"cdc09b60-8801-4296-98e6-94a2e5ac8697\") " pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.350880 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdc09b60-8801-4296-98e6-94a2e5ac8697-logs\") pod \"barbican-api-5c6d99969d-lw2d7\" (UID: \"cdc09b60-8801-4296-98e6-94a2e5ac8697\") " pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.351252 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwm8b\" (UniqueName: \"kubernetes.io/projected/cdc09b60-8801-4296-98e6-94a2e5ac8697-kube-api-access-qwm8b\") pod \"barbican-api-5c6d99969d-lw2d7\" (UID: \"cdc09b60-8801-4296-98e6-94a2e5ac8697\") " pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.351308 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdc09b60-8801-4296-98e6-94a2e5ac8697-config-data-custom\") pod \"barbican-api-5c6d99969d-lw2d7\" (UID: \"cdc09b60-8801-4296-98e6-94a2e5ac8697\") " pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.351338 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdc09b60-8801-4296-98e6-94a2e5ac8697-combined-ca-bundle\") pod \"barbican-api-5c6d99969d-lw2d7\" (UID: \"cdc09b60-8801-4296-98e6-94a2e5ac8697\") " 
pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.357789 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdc09b60-8801-4296-98e6-94a2e5ac8697-internal-tls-certs\") pod \"barbican-api-5c6d99969d-lw2d7\" (UID: \"cdc09b60-8801-4296-98e6-94a2e5ac8697\") " pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.358390 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdc09b60-8801-4296-98e6-94a2e5ac8697-config-data-custom\") pod \"barbican-api-5c6d99969d-lw2d7\" (UID: \"cdc09b60-8801-4296-98e6-94a2e5ac8697\") " pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.358390 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdc09b60-8801-4296-98e6-94a2e5ac8697-public-tls-certs\") pod \"barbican-api-5c6d99969d-lw2d7\" (UID: \"cdc09b60-8801-4296-98e6-94a2e5ac8697\") " pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.359423 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdc09b60-8801-4296-98e6-94a2e5ac8697-config-data\") pod \"barbican-api-5c6d99969d-lw2d7\" (UID: \"cdc09b60-8801-4296-98e6-94a2e5ac8697\") " pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.368036 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdc09b60-8801-4296-98e6-94a2e5ac8697-combined-ca-bundle\") pod \"barbican-api-5c6d99969d-lw2d7\" (UID: \"cdc09b60-8801-4296-98e6-94a2e5ac8697\") " pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.368512 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdc09b60-8801-4296-98e6-94a2e5ac8697-internal-tls-certs\") pod \"barbican-api-5c6d99969d-lw2d7\" (UID: \"cdc09b60-8801-4296-98e6-94a2e5ac8697\") " pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.371954 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwm8b\" (UniqueName: \"kubernetes.io/projected/cdc09b60-8801-4296-98e6-94a2e5ac8697-kube-api-access-qwm8b\") pod \"barbican-api-5c6d99969d-lw2d7\" (UID: \"cdc09b60-8801-4296-98e6-94a2e5ac8697\") " pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.501102 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-74dd8678b6-fs5mq" event={"ID":"effa4eb1-dc32-4d96-8f19-0eaae852f9a1","Type":"ContainerStarted","Data":"655efa256d029f8ef2dfcd803493c416f09ec8bd99c2947169df909a90956ca3"} Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.501146 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-74dd8678b6-fs5mq" event={"ID":"effa4eb1-dc32-4d96-8f19-0eaae852f9a1","Type":"ContainerStarted","Data":"81043f0e07872281a36901fecfa3bf6799ace2d9131bf58f4f5bb5d9cdcf9526"} Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.503635 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-worker-95d8d74d7-xhwwk" event={"ID":"4fcdc6b9-b082-4a9d-b905-7978d941b38f","Type":"ContainerStarted","Data":"9664e6666cdb77e0a759c289db2a21adcc0458305e933a3cf6211c2cb4a3356a"} Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.503664 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-95d8d74d7-xhwwk" event={"ID":"4fcdc6b9-b082-4a9d-b905-7978d941b38f","Type":"ContainerStarted","Data":"1192caaa19059faaf42fbd8af311d1a57509f829936e17833e05e37826881864"} Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.506296 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8","Type":"ContainerStarted","Data":"060226526cf6c190ac4cadd70e251d7fce6a90ad6a7917182b1605ea00dfce16"} Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.506355 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.520617 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-74dd8678b6-fs5mq" podStartSLOduration=2.567985474 podStartE2EDuration="4.520601994s" podCreationTimestamp="2025-11-23 15:02:51 +0000 UTC" firstStartedPulling="2025-11-23 15:02:52.672867639 +0000 UTC m=+1023.912487483" lastFinishedPulling="2025-11-23 15:02:54.625484159 +0000 UTC m=+1025.865104003" observedRunningTime="2025-11-23 15:02:55.517410194 +0000 UTC m=+1026.757030038" watchObservedRunningTime="2025-11-23 15:02:55.520601994 +0000 UTC m=+1026.760221838" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.531701 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.532979 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.556993 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-95d8d74d7-xhwwk" podStartSLOduration=2.364240098 podStartE2EDuration="4.556971074s" podCreationTimestamp="2025-11-23 15:02:51 +0000 UTC" firstStartedPulling="2025-11-23 15:02:52.430143131 +0000 UTC m=+1023.669762975" lastFinishedPulling="2025-11-23 15:02:54.622874107 +0000 UTC m=+1025.862493951" observedRunningTime="2025-11-23 15:02:55.549429574 +0000 UTC m=+1026.789049418" watchObservedRunningTime="2025-11-23 15:02:55.556971074 +0000 UTC m=+1026.796590918" Nov 23 15:02:55 crc kubenswrapper[4718]: I1123 15:02:55.947226 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-cfbmg" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.069678 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2de4e428-ba8b-43d2-893d-e6f020997e5b-config-data\") pod \"2de4e428-ba8b-43d2-893d-e6f020997e5b\" (UID: \"2de4e428-ba8b-43d2-893d-e6f020997e5b\") " Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.070094 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2de4e428-ba8b-43d2-893d-e6f020997e5b-etc-machine-id\") pod \"2de4e428-ba8b-43d2-893d-e6f020997e5b\" (UID: \"2de4e428-ba8b-43d2-893d-e6f020997e5b\") " Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.070145 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2de4e428-ba8b-43d2-893d-e6f020997e5b-scripts\") pod \"2de4e428-ba8b-43d2-893d-e6f020997e5b\" (UID: \"2de4e428-ba8b-43d2-893d-e6f020997e5b\") " Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.070203 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de4e428-ba8b-43d2-893d-e6f020997e5b-combined-ca-bundle\") pod \"2de4e428-ba8b-43d2-893d-e6f020997e5b\" (UID: \"2de4e428-ba8b-43d2-893d-e6f020997e5b\") " Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.070241 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2de4e428-ba8b-43d2-893d-e6f020997e5b-db-sync-config-data\") pod \"2de4e428-ba8b-43d2-893d-e6f020997e5b\" (UID: \"2de4e428-ba8b-43d2-893d-e6f020997e5b\") " Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.070319 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnclm\" (UniqueName: \"kubernetes.io/projected/2de4e428-ba8b-43d2-893d-e6f020997e5b-kube-api-access-jnclm\") pod \"2de4e428-ba8b-43d2-893d-e6f020997e5b\" (UID: \"2de4e428-ba8b-43d2-893d-e6f020997e5b\") " Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.072539 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de4e428-ba8b-43d2-893d-e6f020997e5b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2de4e428-ba8b-43d2-893d-e6f020997e5b" (UID: "2de4e428-ba8b-43d2-893d-e6f020997e5b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.078655 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2de4e428-ba8b-43d2-893d-e6f020997e5b-scripts" (OuterVolumeSpecName: "scripts") pod "2de4e428-ba8b-43d2-893d-e6f020997e5b" (UID: "2de4e428-ba8b-43d2-893d-e6f020997e5b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.081569 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2de4e428-ba8b-43d2-893d-e6f020997e5b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2de4e428-ba8b-43d2-893d-e6f020997e5b" (UID: "2de4e428-ba8b-43d2-893d-e6f020997e5b"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.104661 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2de4e428-ba8b-43d2-893d-e6f020997e5b-kube-api-access-jnclm" (OuterVolumeSpecName: "kube-api-access-jnclm") pod "2de4e428-ba8b-43d2-893d-e6f020997e5b" (UID: "2de4e428-ba8b-43d2-893d-e6f020997e5b"). InnerVolumeSpecName "kube-api-access-jnclm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.112582 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2de4e428-ba8b-43d2-893d-e6f020997e5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2de4e428-ba8b-43d2-893d-e6f020997e5b" (UID: "2de4e428-ba8b-43d2-893d-e6f020997e5b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.166218 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5c6d99969d-lw2d7"] Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.168636 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2de4e428-ba8b-43d2-893d-e6f020997e5b-config-data" (OuterVolumeSpecName: "config-data") pod "2de4e428-ba8b-43d2-893d-e6f020997e5b" (UID: "2de4e428-ba8b-43d2-893d-e6f020997e5b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.173051 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnclm\" (UniqueName: \"kubernetes.io/projected/2de4e428-ba8b-43d2-893d-e6f020997e5b-kube-api-access-jnclm\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.173071 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2de4e428-ba8b-43d2-893d-e6f020997e5b-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.173080 4718 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2de4e428-ba8b-43d2-893d-e6f020997e5b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.173092 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2de4e428-ba8b-43d2-893d-e6f020997e5b-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.173104 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de4e428-ba8b-43d2-893d-e6f020997e5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.173114 4718 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2de4e428-ba8b-43d2-893d-e6f020997e5b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.540554 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-cfbmg" event={"ID":"2de4e428-ba8b-43d2-893d-e6f020997e5b","Type":"ContainerDied","Data":"4214748061dedd11c505dc9751733a8d14d47fc28c98da134f548b0456972e12"} Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.540868 4718 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4214748061dedd11c505dc9751733a8d14d47fc28c98da134f548b0456972e12" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.540932 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-cfbmg" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.548661 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5c6d99969d-lw2d7" event={"ID":"cdc09b60-8801-4296-98e6-94a2e5ac8697","Type":"ContainerStarted","Data":"403fd28a91b8af01c9525ce20cff696c568d288665f0c2f62b9e13da31ca2099"} Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.548708 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5c6d99969d-lw2d7" event={"ID":"cdc09b60-8801-4296-98e6-94a2e5ac8697","Type":"ContainerStarted","Data":"3191f5b536d9f428ac96595dc2a270740aaa90269427af61c19b7eff2ec476ab"} Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.559472 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8","Type":"ContainerStarted","Data":"8aea759c87cbd913895f69805c9fdeecaf55bb9d1cbc1219826f0b4cfe4f2adf"} Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.756397 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 15:02:56 crc kubenswrapper[4718]: E1123 15:02:56.756782 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2de4e428-ba8b-43d2-893d-e6f020997e5b" containerName="cinder-db-sync" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.756798 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2de4e428-ba8b-43d2-893d-e6f020997e5b" containerName="cinder-db-sync" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.756955 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2de4e428-ba8b-43d2-893d-e6f020997e5b" containerName="cinder-db-sync" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.757947 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.761556 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-j8z9d" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.761805 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.761907 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.761933 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.780709 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.826262 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-8lb6x"] Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.826529 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" podUID="4c8a309d-d182-4ad3-9818-a2c47fa25cec" containerName="dnsmasq-dns" containerID="cri-o://f4c1f7b6f58a09fdd334da6405a31e21a30c1a87a152b5e83bb3f09855109361" gracePeriod=10 Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.872655 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-w86hh"] Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.876888 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.897370 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e60da8e4-38fb-49f2-9325-8d8d55109b49-scripts\") pod \"cinder-scheduler-0\" (UID: \"e60da8e4-38fb-49f2-9325-8d8d55109b49\") " pod="openstack/cinder-scheduler-0" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.897432 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e60da8e4-38fb-49f2-9325-8d8d55109b49-config-data\") pod \"cinder-scheduler-0\" (UID: \"e60da8e4-38fb-49f2-9325-8d8d55109b49\") " pod="openstack/cinder-scheduler-0" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.897467 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlmph\" (UniqueName: \"kubernetes.io/projected/e60da8e4-38fb-49f2-9325-8d8d55109b49-kube-api-access-rlmph\") pod \"cinder-scheduler-0\" (UID: \"e60da8e4-38fb-49f2-9325-8d8d55109b49\") " pod="openstack/cinder-scheduler-0" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.897500 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e60da8e4-38fb-49f2-9325-8d8d55109b49-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e60da8e4-38fb-49f2-9325-8d8d55109b49\") " pod="openstack/cinder-scheduler-0" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.897528 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e60da8e4-38fb-49f2-9325-8d8d55109b49-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e60da8e4-38fb-49f2-9325-8d8d55109b49\") " pod="openstack/cinder-scheduler-0" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.897547 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e60da8e4-38fb-49f2-9325-8d8d55109b49-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e60da8e4-38fb-49f2-9325-8d8d55109b49\") " pod="openstack/cinder-scheduler-0" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.898943 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-w86hh"] Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.995524 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.998821 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e60da8e4-38fb-49f2-9325-8d8d55109b49-scripts\") pod \"cinder-scheduler-0\" (UID: \"e60da8e4-38fb-49f2-9325-8d8d55109b49\") " pod="openstack/cinder-scheduler-0" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.998873 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-ovsdbserver-sb\") pod \"dnsmasq-dns-795f4db4bc-w86hh\" (UID: \"ef450690-daaa-42a6-9b05-74dc308bdf50\") " pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.998891 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-config\") pod \"dnsmasq-dns-795f4db4bc-w86hh\" (UID: \"ef450690-daaa-42a6-9b05-74dc308bdf50\") " pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.998917 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e60da8e4-38fb-49f2-9325-8d8d55109b49-config-data\") pod \"cinder-scheduler-0\" (UID: \"e60da8e4-38fb-49f2-9325-8d8d55109b49\") " pod="openstack/cinder-scheduler-0" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.998937 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlmph\" (UniqueName: \"kubernetes.io/projected/e60da8e4-38fb-49f2-9325-8d8d55109b49-kube-api-access-rlmph\") pod \"cinder-scheduler-0\" (UID: \"e60da8e4-38fb-49f2-9325-8d8d55109b49\") " pod="openstack/cinder-scheduler-0" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.998962 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-dns-swift-storage-0\") pod \"dnsmasq-dns-795f4db4bc-w86hh\" (UID: \"ef450690-daaa-42a6-9b05-74dc308bdf50\") " pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.998982 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-dns-svc\") pod \"dnsmasq-dns-795f4db4bc-w86hh\" (UID: 
\"ef450690-daaa-42a6-9b05-74dc308bdf50\") " pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.999000 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e60da8e4-38fb-49f2-9325-8d8d55109b49-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e60da8e4-38fb-49f2-9325-8d8d55109b49\") " pod="openstack/cinder-scheduler-0" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.999017 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfm8k\" (UniqueName: \"kubernetes.io/projected/ef450690-daaa-42a6-9b05-74dc308bdf50-kube-api-access-tfm8k\") pod \"dnsmasq-dns-795f4db4bc-w86hh\" (UID: \"ef450690-daaa-42a6-9b05-74dc308bdf50\") " pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.999040 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e60da8e4-38fb-49f2-9325-8d8d55109b49-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e60da8e4-38fb-49f2-9325-8d8d55109b49\") " pod="openstack/cinder-scheduler-0" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.999059 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e60da8e4-38fb-49f2-9325-8d8d55109b49-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e60da8e4-38fb-49f2-9325-8d8d55109b49\") " pod="openstack/cinder-scheduler-0" Nov 23 15:02:56 crc kubenswrapper[4718]: I1123 15:02:56.999082 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-ovsdbserver-nb\") pod \"dnsmasq-dns-795f4db4bc-w86hh\" (UID: \"ef450690-daaa-42a6-9b05-74dc308bdf50\") " pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.000065 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e60da8e4-38fb-49f2-9325-8d8d55109b49-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e60da8e4-38fb-49f2-9325-8d8d55109b49\") " pod="openstack/cinder-scheduler-0" Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.020189 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.020490 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e60da8e4-38fb-49f2-9325-8d8d55109b49-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e60da8e4-38fb-49f2-9325-8d8d55109b49\") " pod="openstack/cinder-scheduler-0" Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.022843 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e60da8e4-38fb-49f2-9325-8d8d55109b49-scripts\") pod \"cinder-scheduler-0\" (UID: \"e60da8e4-38fb-49f2-9325-8d8d55109b49\") " pod="openstack/cinder-scheduler-0" Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.022856 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.023195 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e60da8e4-38fb-49f2-9325-8d8d55109b49-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e60da8e4-38fb-49f2-9325-8d8d55109b49\") " pod="openstack/cinder-scheduler-0" Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.023317 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e60da8e4-38fb-49f2-9325-8d8d55109b49-config-data\") pod \"cinder-scheduler-0\" (UID: \"e60da8e4-38fb-49f2-9325-8d8d55109b49\") " pod="openstack/cinder-scheduler-0" Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.024555 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.048283 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlmph\" (UniqueName: \"kubernetes.io/projected/e60da8e4-38fb-49f2-9325-8d8d55109b49-kube-api-access-rlmph\") pod \"cinder-scheduler-0\" (UID: \"e60da8e4-38fb-49f2-9325-8d8d55109b49\") " pod="openstack/cinder-scheduler-0" Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.074180 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.101862 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-dns-swift-storage-0\") pod \"dnsmasq-dns-795f4db4bc-w86hh\" (UID: \"ef450690-daaa-42a6-9b05-74dc308bdf50\") " pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.101917 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-dns-svc\") pod \"dnsmasq-dns-795f4db4bc-w86hh\" (UID: \"ef450690-daaa-42a6-9b05-74dc308bdf50\") " pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.101946 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfm8k\" (UniqueName: \"kubernetes.io/projected/ef450690-daaa-42a6-9b05-74dc308bdf50-kube-api-access-tfm8k\") pod \"dnsmasq-dns-795f4db4bc-w86hh\" (UID: \"ef450690-daaa-42a6-9b05-74dc308bdf50\") " pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.101980 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-ovsdbserver-nb\") pod \"dnsmasq-dns-795f4db4bc-w86hh\" (UID: \"ef450690-daaa-42a6-9b05-74dc308bdf50\") " pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.102079 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-ovsdbserver-sb\") pod \"dnsmasq-dns-795f4db4bc-w86hh\" (UID: \"ef450690-daaa-42a6-9b05-74dc308bdf50\") " pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.102095 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-config\") pod \"dnsmasq-dns-795f4db4bc-w86hh\" (UID: \"ef450690-daaa-42a6-9b05-74dc308bdf50\") " pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.102995 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-config\") pod \"dnsmasq-dns-795f4db4bc-w86hh\" (UID: \"ef450690-daaa-42a6-9b05-74dc308bdf50\") " pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.104432 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-dns-swift-storage-0\") pod \"dnsmasq-dns-795f4db4bc-w86hh\" (UID: \"ef450690-daaa-42a6-9b05-74dc308bdf50\") " pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.105024 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-dns-svc\") pod \"dnsmasq-dns-795f4db4bc-w86hh\" (UID: \"ef450690-daaa-42a6-9b05-74dc308bdf50\") " pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" Nov 23 15:02:57 crc kubenswrapper[4718]: 
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.105655 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-ovsdbserver-nb\") pod \"dnsmasq-dns-795f4db4bc-w86hh\" (UID: \"ef450690-daaa-42a6-9b05-74dc308bdf50\") " pod="openstack/dnsmasq-dns-795f4db4bc-w86hh"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.106135 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-ovsdbserver-sb\") pod \"dnsmasq-dns-795f4db4bc-w86hh\" (UID: \"ef450690-daaa-42a6-9b05-74dc308bdf50\") " pod="openstack/dnsmasq-dns-795f4db4bc-w86hh"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.131496 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfm8k\" (UniqueName: \"kubernetes.io/projected/ef450690-daaa-42a6-9b05-74dc308bdf50-kube-api-access-tfm8k\") pod \"dnsmasq-dns-795f4db4bc-w86hh\" (UID: \"ef450690-daaa-42a6-9b05-74dc308bdf50\") " pod="openstack/dnsmasq-dns-795f4db4bc-w86hh"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.203502 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-w86hh"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.203913 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23c3b676-60b7-4d06-b284-2be1fd9824c5-logs\") pod \"cinder-api-0\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " pod="openstack/cinder-api-0"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.203960 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23c3b676-60b7-4d06-b284-2be1fd9824c5-config-data\") pod \"cinder-api-0\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " pod="openstack/cinder-api-0"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.203977 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f77xx\" (UniqueName: \"kubernetes.io/projected/23c3b676-60b7-4d06-b284-2be1fd9824c5-kube-api-access-f77xx\") pod \"cinder-api-0\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " pod="openstack/cinder-api-0"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.203997 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23c3b676-60b7-4d06-b284-2be1fd9824c5-config-data-custom\") pod \"cinder-api-0\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " pod="openstack/cinder-api-0"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.204071 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23c3b676-60b7-4d06-b284-2be1fd9824c5-scripts\") pod \"cinder-api-0\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " pod="openstack/cinder-api-0"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.204142 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23c3b676-60b7-4d06-b284-2be1fd9824c5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " pod="openstack/cinder-api-0"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.204189 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/23c3b676-60b7-4d06-b284-2be1fd9824c5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " pod="openstack/cinder-api-0"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.307827 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23c3b676-60b7-4d06-b284-2be1fd9824c5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " pod="openstack/cinder-api-0"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.307914 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/23c3b676-60b7-4d06-b284-2be1fd9824c5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " pod="openstack/cinder-api-0"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.308027 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23c3b676-60b7-4d06-b284-2be1fd9824c5-logs\") pod \"cinder-api-0\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " pod="openstack/cinder-api-0"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.308110 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23c3b676-60b7-4d06-b284-2be1fd9824c5-config-data\") pod \"cinder-api-0\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " pod="openstack/cinder-api-0"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.308139 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f77xx\" (UniqueName: \"kubernetes.io/projected/23c3b676-60b7-4d06-b284-2be1fd9824c5-kube-api-access-f77xx\") pod \"cinder-api-0\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " pod="openstack/cinder-api-0"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.308173 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23c3b676-60b7-4d06-b284-2be1fd9824c5-config-data-custom\") pod \"cinder-api-0\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " pod="openstack/cinder-api-0"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.308236 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23c3b676-60b7-4d06-b284-2be1fd9824c5-scripts\") pod \"cinder-api-0\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " pod="openstack/cinder-api-0"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.309633 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23c3b676-60b7-4d06-b284-2be1fd9824c5-logs\") pod \"cinder-api-0\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " pod="openstack/cinder-api-0"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.310960 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/23c3b676-60b7-4d06-b284-2be1fd9824c5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " pod="openstack/cinder-api-0"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.322570 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23c3b676-60b7-4d06-b284-2be1fd9824c5-config-data\") pod \"cinder-api-0\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " pod="openstack/cinder-api-0"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.325166 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23c3b676-60b7-4d06-b284-2be1fd9824c5-scripts\") pod \"cinder-api-0\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " pod="openstack/cinder-api-0"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.334678 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23c3b676-60b7-4d06-b284-2be1fd9824c5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " pod="openstack/cinder-api-0"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.339092 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f77xx\" (UniqueName: \"kubernetes.io/projected/23c3b676-60b7-4d06-b284-2be1fd9824c5-kube-api-access-f77xx\") pod \"cinder-api-0\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " pod="openstack/cinder-api-0"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.349143 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23c3b676-60b7-4d06-b284-2be1fd9824c5-config-data-custom\") pod \"cinder-api-0\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " pod="openstack/cinder-api-0"
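
The recurring util.go:30 message "No sandbox for pod can be found. Need to start a new one" marks the point where pod sync decides a pod needs a fresh sandbox because the runtime has no sandbox record at all; the util.go:48 variant seen elsewhere in this log fires when sandboxes exist but none is ready. A rough sketch of that decision with simplified, hypothetical types (not the kubelet's actual code, which also checks network namespace and IP changes):

    package main

    import "fmt"

    type sandbox struct{ ready bool }

    // podSandboxChanged mirrors the shape of the kubelet's decision:
    // no sandbox at all -> start one; none ready -> start a new one.
    func podSandboxChanged(sandboxes []sandbox) (createNew bool, why string) {
        if len(sandboxes) == 0 {
            return true, "No sandbox for pod can be found. Need to start a new one"
        }
        for _, s := range sandboxes {
            if s.ready {
                return false, "reuse ready sandbox"
            }
        }
        return true, "No ready sandbox for pod can be found. Need to start a new one"
    }

    func main() {
        cases := [][]sandbox{nil, {{ready: false}}, {{ready: true}}}
        for _, sbs := range cases {
            create, why := podSandboxChanged(sbs)
            fmt.Println(create, "-", why)
        }
    }

For a newly created pod like cinder-api-0 this message is routine, not an error: the first sync simply observes that no sandbox exists yet and schedules its creation.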
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.515605 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.630248 4718 generic.go:334] "Generic (PLEG): container finished" podID="4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58" containerID="ecc8668f1c44c21f2579dc3d2c0c6c98e4b067637d0542bebce13521bd1fe8e1" exitCode=0
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.630329 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bppgm" event={"ID":"4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58","Type":"ContainerDied","Data":"ecc8668f1c44c21f2579dc3d2c0c6c98e4b067637d0542bebce13521bd1fe8e1"}
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.646419 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8","Type":"ContainerStarted","Data":"ceaf2b73c7bf8744805d2a7cfdb0d60e152213c03b9c16ff8c2df4a3c7035586"}
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.674706 4718 generic.go:334] "Generic (PLEG): container finished" podID="4c8a309d-d182-4ad3-9818-a2c47fa25cec" containerID="f4c1f7b6f58a09fdd334da6405a31e21a30c1a87a152b5e83bb3f09855109361" exitCode=0
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.674812 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" event={"ID":"4c8a309d-d182-4ad3-9818-a2c47fa25cec","Type":"ContainerDied","Data":"f4c1f7b6f58a09fdd334da6405a31e21a30c1a87a152b5e83bb3f09855109361"}
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.686642 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.690802 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5c6d99969d-lw2d7" event={"ID":"cdc09b60-8801-4296-98e6-94a2e5ac8697","Type":"ContainerStarted","Data":"3a8da16109690eabe8894b114a09adb71d7a8ff7ea18cb73cbc7c91977a3cdd1"}
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.691085 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5c6d99969d-lw2d7"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.691165 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5c6d99969d-lw2d7"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.726834 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5c6d99969d-lw2d7" podStartSLOduration=2.726809993 podStartE2EDuration="2.726809993s" podCreationTimestamp="2025-11-23 15:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:02:57.714639486 +0000 UTC m=+1028.954259350" watchObservedRunningTime="2025-11-23 15:02:57.726809993 +0000 UTC m=+1028.966429847"
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.836049 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-w86hh"]
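
The "Generic (PLEG): container finished" lines come from the pod lifecycle event generator: on each relist it polls the runtime for container states, diffs the result against its cached view, and turns every transition into a ContainerStarted or ContainerDied event that the sync loop then dispatches (the kubelet.go:2453 lines immediately after each one). A much-reduced sketch of that diffing idea, under hypothetical types:

    package main

    import "fmt"

    type state map[string]string // containerID -> "running" | "exited"

    // relist diffs the old cached container states against a fresh
    // runtime poll and emits lifecycle events, the way PLEG turns
    // polls into ContainerStarted/ContainerDied.
    func relist(old, cur state) []string {
        var events []string
        for id, s := range cur {
            switch {
            case old[id] != "running" && s == "running":
                events = append(events, "ContainerStarted "+id)
            case old[id] == "running" && s == "exited":
                events = append(events, "ContainerDied "+id)
            }
        }
        return events
    }

    func main() {
        old := state{"ecc8668f": "running"}
        cur := state{"ecc8668f": "exited", "ceaf2b73": "running"}
        for _, e := range relist(old, cur) {
            fmt.Println(e)
        }
    }

Here the diff explains the interleaving: the neutron-db-sync job container exiting (exitCode=0) and the ceilometer container starting are discovered in the same relist and reported back to back.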
Nov 23 15:02:57 crc kubenswrapper[4718]: I1123 15:02:57.909720 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x"
Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:57.999535 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.030665 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-config\") pod \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\" (UID: \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\") "
Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.030842 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc4lt\" (UniqueName: \"kubernetes.io/projected/4c8a309d-d182-4ad3-9818-a2c47fa25cec-kube-api-access-sc4lt\") pod \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\" (UID: \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\") "
Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.031424 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-ovsdbserver-sb\") pod \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\" (UID: \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\") "
Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.031533 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-ovsdbserver-nb\") pod \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\" (UID: \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\") "
Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.031586 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-dns-svc\") pod \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\" (UID: \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\") "
Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.031638 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-dns-swift-storage-0\") pod \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\" (UID: \"4c8a309d-d182-4ad3-9818-a2c47fa25cec\") "
Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.068885 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c8a309d-d182-4ad3-9818-a2c47fa25cec-kube-api-access-sc4lt" (OuterVolumeSpecName: "kube-api-access-sc4lt") pod "4c8a309d-d182-4ad3-9818-a2c47fa25cec" (UID: "4c8a309d-d182-4ad3-9818-a2c47fa25cec"). InnerVolumeSpecName "kube-api-access-sc4lt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.139146 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc4lt\" (UniqueName: \"kubernetes.io/projected/4c8a309d-d182-4ad3-9818-a2c47fa25cec-kube-api-access-sc4lt\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.149076 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4c8a309d-d182-4ad3-9818-a2c47fa25cec" (UID: "4c8a309d-d182-4ad3-9818-a2c47fa25cec"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.153390 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4c8a309d-d182-4ad3-9818-a2c47fa25cec" (UID: "4c8a309d-d182-4ad3-9818-a2c47fa25cec"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.154109 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-config" (OuterVolumeSpecName: "config") pod "4c8a309d-d182-4ad3-9818-a2c47fa25cec" (UID: "4c8a309d-d182-4ad3-9818-a2c47fa25cec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.169618 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4c8a309d-d182-4ad3-9818-a2c47fa25cec" (UID: "4c8a309d-d182-4ad3-9818-a2c47fa25cec"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.211123 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4c8a309d-d182-4ad3-9818-a2c47fa25cec" (UID: "4c8a309d-d182-4ad3-9818-a2c47fa25cec"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.242274 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.242302 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.242313 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.242322 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.242341 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c8a309d-d182-4ad3-9818-a2c47fa25cec-config\") on node \"crc\" DevicePath \"\""
Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.709586 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"23c3b676-60b7-4d06-b284-2be1fd9824c5","Type":"ContainerStarted","Data":"81e59305d68468a67b6d12e331e95d8b330e5afc6bf805187d8413fe39c18b4d"}
pod="openstack/cinder-scheduler-0" event={"ID":"e60da8e4-38fb-49f2-9325-8d8d55109b49","Type":"ContainerStarted","Data":"0ac9efd74ca14e5a0a6713309fe1ee23feafcd9c6b150dd457cabc75188ad3fb"} Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.712990 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8","Type":"ContainerStarted","Data":"47d4fa454c9683a2dc39b85e1937880f2876ce282461eed942170add5cba7ed4"} Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.716368 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" event={"ID":"ef450690-daaa-42a6-9b05-74dc308bdf50","Type":"ContainerStarted","Data":"de23edc6e57af72c653bbdb88c73900352cb2170bb316d749cc52e9f70845cad"} Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.716421 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" event={"ID":"ef450690-daaa-42a6-9b05-74dc308bdf50","Type":"ContainerStarted","Data":"7726eb7428a5265f67f6d8d9282f512dec27e2a09e97b3b6021eb0871bcdbc8b"} Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.721060 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" event={"ID":"4c8a309d-d182-4ad3-9818-a2c47fa25cec","Type":"ContainerDied","Data":"7a3facdfdd0ea72aa7111662fac60ea2f2ddac429c79367211d27a101bb95c75"} Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.721300 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-8lb6x" Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.721349 4718 scope.go:117] "RemoveContainer" containerID="f4c1f7b6f58a09fdd334da6405a31e21a30c1a87a152b5e83bb3f09855109361" Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.768542 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-8lb6x"] Nov 23 15:02:58 crc kubenswrapper[4718]: I1123 15:02:58.784101 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-8lb6x"] Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.335778 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-bppgm" Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.388896 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk4p7\" (UniqueName: \"kubernetes.io/projected/4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58-kube-api-access-rk4p7\") pod \"4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58\" (UID: \"4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58\") " Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.389026 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58-config\") pod \"4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58\" (UID: \"4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58\") " Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.389109 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58-combined-ca-bundle\") pod \"4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58\" (UID: \"4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58\") " Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.394605 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58-kube-api-access-rk4p7" (OuterVolumeSpecName: "kube-api-access-rk4p7") pod "4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58" (UID: "4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58"). InnerVolumeSpecName "kube-api-access-rk4p7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.417677 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58" (UID: "4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.418913 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58-config" (OuterVolumeSpecName: "config") pod "4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58" (UID: "4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.491943 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.491980 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk4p7\" (UniqueName: \"kubernetes.io/projected/4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58-kube-api-access-rk4p7\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.491992 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58-config\") on node \"crc\" DevicePath \"\"" Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.534383 4718 scope.go:117] "RemoveContainer" containerID="6050772260a85e73dbc998e01d680a6da0078372e31204af4ba0c48b528323e8" Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.566039 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.773845 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-677cc89bb-l2pwx" Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.775933 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"23c3b676-60b7-4d06-b284-2be1fd9824c5","Type":"ContainerStarted","Data":"f8e579249b2487677f59ecf0c0807df241d0db66bf3c27d5e9d18e8b7d1fc35a"} Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.781407 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bppgm" event={"ID":"4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58","Type":"ContainerDied","Data":"ff5448ff258c01cdc17bf5c7928d4cf486a52232632fc4a0a0acf50e781b745e"} Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.781429 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff5448ff258c01cdc17bf5c7928d4cf486a52232632fc4a0a0acf50e781b745e" Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.781493 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-bppgm" Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.834538 4718 generic.go:334] "Generic (PLEG): container finished" podID="ef450690-daaa-42a6-9b05-74dc308bdf50" containerID="de23edc6e57af72c653bbdb88c73900352cb2170bb316d749cc52e9f70845cad" exitCode=0 Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.834577 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" event={"ID":"ef450690-daaa-42a6-9b05-74dc308bdf50","Type":"ContainerDied","Data":"de23edc6e57af72c653bbdb88c73900352cb2170bb316d749cc52e9f70845cad"} Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.861606 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-w86hh"] Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.876028 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-4h5th"] Nov 23 15:02:59 crc kubenswrapper[4718]: E1123 15:02:59.876418 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58" containerName="neutron-db-sync" Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.876429 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58" containerName="neutron-db-sync" Nov 23 15:02:59 crc kubenswrapper[4718]: E1123 15:02:59.876466 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c8a309d-d182-4ad3-9818-a2c47fa25cec" containerName="dnsmasq-dns" Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.876474 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c8a309d-d182-4ad3-9818-a2c47fa25cec" containerName="dnsmasq-dns" Nov 23 15:02:59 crc kubenswrapper[4718]: E1123 15:02:59.876486 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c8a309d-d182-4ad3-9818-a2c47fa25cec" containerName="init" Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.876492 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c8a309d-d182-4ad3-9818-a2c47fa25cec" containerName="init" Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.876635 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58" containerName="neutron-db-sync" Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.876663 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c8a309d-d182-4ad3-9818-a2c47fa25cec" containerName="dnsmasq-dns" Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.877522 4718 util.go:30] "No sandbox for pod can be found. 
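
The cpu_manager/state_mem and memory_manager pairs above run as part of admitting the new dnsmasq pod: RemoveStaleState sweeps the resource managers' checkpointed per-container assignments and drops entries whose pods no longer exist, here the just-deleted neutron-db-sync job and the old dnsmasq pod, logging "Deleted CPUSet assignment" as each entry goes. A compact sketch of that sweep over a hypothetical state layout:

    package main

    import "fmt"

    // assignments: podUID -> containerName -> cpuset (string here for brevity)
    type state map[string]map[string]string

    // removeStaleState drops checkpointed assignments for pods that are
    // no longer active, the way the cpu/memory managers clean up on
    // pod admission.
    func removeStaleState(s state, active map[string]bool) {
        for podUID, containers := range s {
            if active[podUID] {
                continue
            }
            for name := range containers {
                fmt.Printf("RemoveStaleState: removing container pod=%s container=%s\n", podUID, name)
                fmt.Printf("Deleted CPUSet assignment pod=%s container=%s\n", podUID, name)
            }
            delete(s, podUID)
        }
    }

    func main() {
        s := state{
            "4a3b7bc7": {"neutron-db-sync": "0-3"},
            "4c8a309d": {"dnsmasq-dns": "0-3", "init": "0-3"},
        }
        removeStaleState(s, map[string]bool{}) // neither pod is active any more
    }

The E (error) severity on the cpu_manager lines is cosmetic in this situation: finding stale entries for recently deleted pods during a normal churn of short-lived pods is expected housekeeping.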
Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.877522 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th"
Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.897566 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-4h5th"]
Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.904266 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-4h5th\" (UID: \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th"
Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.904363 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-4h5th\" (UID: \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th"
Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.904391 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-config\") pod \"dnsmasq-dns-5c9776ccc5-4h5th\" (UID: \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th"
Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.904425 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-4h5th\" (UID: \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th"
Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.904466 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-4h5th\" (UID: \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th"
Nov 23 15:02:59 crc kubenswrapper[4718]: I1123 15:02:59.904492 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdmfx\" (UniqueName: \"kubernetes.io/projected/251d390c-d74c-4a3f-9ec4-9f995abc4c24-kube-api-access-jdmfx\") pod \"dnsmasq-dns-5c9776ccc5-4h5th\" (UID: \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th"
Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.008986 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-4h5th\" (UID: \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th"
Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.009378 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-4h5th\" (UID: \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th"
Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.009415 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-config\") pod \"dnsmasq-dns-5c9776ccc5-4h5th\" (UID: \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th"
Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.009487 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-4h5th\" (UID: \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th"
Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.009512 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-4h5th\" (UID: \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th"
Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.009539 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdmfx\" (UniqueName: \"kubernetes.io/projected/251d390c-d74c-4a3f-9ec4-9f995abc4c24-kube-api-access-jdmfx\") pod \"dnsmasq-dns-5c9776ccc5-4h5th\" (UID: \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th"
Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.011214 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-config\") pod \"dnsmasq-dns-5c9776ccc5-4h5th\" (UID: \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th"
Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.011808 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-4h5th\" (UID: \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th"
Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.011869 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-4h5th\" (UID: \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th"
Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.011880 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-4h5th\" (UID: \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th"
Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.012273 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-4h5th\" (UID: \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th"
Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.042210 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-57485cc44d-fj2x7"]
Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.049820 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-57485cc44d-fj2x7"]
Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.049924 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-57485cc44d-fj2x7"
Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.055169 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.055313 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-r5hj2"
Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.055592 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.057704 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.060530 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdmfx\" (UniqueName: \"kubernetes.io/projected/251d390c-d74c-4a3f-9ec4-9f995abc4c24-kube-api-access-jdmfx\") pod \"dnsmasq-dns-5c9776ccc5-4h5th\" (UID: \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th"
Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.111427 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89948ec-25f4-4d02-985b-f9fdd43437d1-ovndb-tls-certs\") pod \"neutron-57485cc44d-fj2x7\" (UID: \"e89948ec-25f4-4d02-985b-f9fdd43437d1\") " pod="openstack/neutron-57485cc44d-fj2x7"
Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.111488 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e89948ec-25f4-4d02-985b-f9fdd43437d1-config\") pod \"neutron-57485cc44d-fj2x7\" (UID: \"e89948ec-25f4-4d02-985b-f9fdd43437d1\") " pod="openstack/neutron-57485cc44d-fj2x7"
Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.111657 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7p6b\" (UniqueName: \"kubernetes.io/projected/e89948ec-25f4-4d02-985b-f9fdd43437d1-kube-api-access-k7p6b\") pod \"neutron-57485cc44d-fj2x7\" (UID: \"e89948ec-25f4-4d02-985b-f9fdd43437d1\") " pod="openstack/neutron-57485cc44d-fj2x7"
Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.111725 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e89948ec-25f4-4d02-985b-f9fdd43437d1-httpd-config\") pod \"neutron-57485cc44d-fj2x7\" (UID: \"e89948ec-25f4-4d02-985b-f9fdd43437d1\") " pod="openstack/neutron-57485cc44d-fj2x7"
Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.111831 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89948ec-25f4-4d02-985b-f9fdd43437d1-combined-ca-bundle\") pod \"neutron-57485cc44d-fj2x7\" (UID: \"e89948ec-25f4-4d02-985b-f9fdd43437d1\") " pod="openstack/neutron-57485cc44d-fj2x7"
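
The reflector.go:368 "Caches populated for *v1.Secret from object-..." lines show the kubelet lazily starting a watch per referenced Secret when the neutron pod lands: each reflector lists the single object, fills a local cache, and keeps it current via a watch, so secret-backed volume setups read from the cache instead of hitting the API server every sync. The same per-object list/watch can be reproduced with client-go roughly as below; this is a sketch that assumes a reachable kubeconfig, and exact helper names can differ across client-go versions:

    package main

    import (
        "fmt"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // One informer scoped to a single Secret, analogous to the
        // kubelet's per-object watch for "openstack"/"neutron-config".
        factory := informers.NewSharedInformerFactoryWithOptions(cs, 10*time.Minute,
            informers.WithNamespace("openstack"),
            informers.WithTweakListOptions(func(o *metav1.ListOptions) {
                o.FieldSelector = "metadata.name=neutron-config"
            }))
        inf := factory.Core().V1().Secrets().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)
        if cache.WaitForCacheSync(stop, inf.HasSynced) {
            fmt.Println(`Caches populated for *v1.Secret from object-"openstack"/"neutron-config"`)
        }
    }

Scoping the list/watch to a single named object keeps the kubelet's API-server load proportional to the secrets actually mounted on the node rather than to everything in the namespace.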
\"kubernetes.io/secret/e89948ec-25f4-4d02-985b-f9fdd43437d1-ovndb-tls-certs\") pod \"neutron-57485cc44d-fj2x7\" (UID: \"e89948ec-25f4-4d02-985b-f9fdd43437d1\") " pod="openstack/neutron-57485cc44d-fj2x7" Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.213576 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e89948ec-25f4-4d02-985b-f9fdd43437d1-config\") pod \"neutron-57485cc44d-fj2x7\" (UID: \"e89948ec-25f4-4d02-985b-f9fdd43437d1\") " pod="openstack/neutron-57485cc44d-fj2x7" Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.213632 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7p6b\" (UniqueName: \"kubernetes.io/projected/e89948ec-25f4-4d02-985b-f9fdd43437d1-kube-api-access-k7p6b\") pod \"neutron-57485cc44d-fj2x7\" (UID: \"e89948ec-25f4-4d02-985b-f9fdd43437d1\") " pod="openstack/neutron-57485cc44d-fj2x7" Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.213662 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e89948ec-25f4-4d02-985b-f9fdd43437d1-httpd-config\") pod \"neutron-57485cc44d-fj2x7\" (UID: \"e89948ec-25f4-4d02-985b-f9fdd43437d1\") " pod="openstack/neutron-57485cc44d-fj2x7" Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.214183 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89948ec-25f4-4d02-985b-f9fdd43437d1-combined-ca-bundle\") pod \"neutron-57485cc44d-fj2x7\" (UID: \"e89948ec-25f4-4d02-985b-f9fdd43437d1\") " pod="openstack/neutron-57485cc44d-fj2x7" Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.217411 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th" Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.229357 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e89948ec-25f4-4d02-985b-f9fdd43437d1-httpd-config\") pod \"neutron-57485cc44d-fj2x7\" (UID: \"e89948ec-25f4-4d02-985b-f9fdd43437d1\") " pod="openstack/neutron-57485cc44d-fj2x7" Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.229392 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89948ec-25f4-4d02-985b-f9fdd43437d1-combined-ca-bundle\") pod \"neutron-57485cc44d-fj2x7\" (UID: \"e89948ec-25f4-4d02-985b-f9fdd43437d1\") " pod="openstack/neutron-57485cc44d-fj2x7" Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.229562 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89948ec-25f4-4d02-985b-f9fdd43437d1-ovndb-tls-certs\") pod \"neutron-57485cc44d-fj2x7\" (UID: \"e89948ec-25f4-4d02-985b-f9fdd43437d1\") " pod="openstack/neutron-57485cc44d-fj2x7" Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.232381 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e89948ec-25f4-4d02-985b-f9fdd43437d1-config\") pod \"neutron-57485cc44d-fj2x7\" (UID: \"e89948ec-25f4-4d02-985b-f9fdd43437d1\") " pod="openstack/neutron-57485cc44d-fj2x7" Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.241250 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7p6b\" (UniqueName: \"kubernetes.io/projected/e89948ec-25f4-4d02-985b-f9fdd43437d1-kube-api-access-k7p6b\") pod \"neutron-57485cc44d-fj2x7\" (UID: \"e89948ec-25f4-4d02-985b-f9fdd43437d1\") " pod="openstack/neutron-57485cc44d-fj2x7" Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.382641 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-57485cc44d-fj2x7" Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.462936 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c8a309d-d182-4ad3-9818-a2c47fa25cec" path="/var/lib/kubelet/pods/4c8a309d-d182-4ad3-9818-a2c47fa25cec/volumes" Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.588810 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-4h5th"] Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.849650 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th" event={"ID":"251d390c-d74c-4a3f-9ec4-9f995abc4c24","Type":"ContainerStarted","Data":"d7d8e0114cc485376e38ecb5de737423ba2b13c9bc4363876146b572f27e6949"} Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.849696 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th" event={"ID":"251d390c-d74c-4a3f-9ec4-9f995abc4c24","Type":"ContainerStarted","Data":"31c94c5ac9bf5c9b735024d09eee350eccb30371d2b594f5e0186d0aa6fab3a2"} Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.856878 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8","Type":"ContainerStarted","Data":"78cff56b90a780414e1fa5e313094c791ef97eb867f7b4d9bf6533570cecdc75"} Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.857941 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.863893 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" event={"ID":"ef450690-daaa-42a6-9b05-74dc308bdf50","Type":"ContainerStarted","Data":"cfbe398ef321e2f930e95259f15242a63ee82c1dddf1392c5806ad8768766fe6"} Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.864030 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.864026 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" podUID="ef450690-daaa-42a6-9b05-74dc308bdf50" containerName="dnsmasq-dns" containerID="cri-o://cfbe398ef321e2f930e95259f15242a63ee82c1dddf1392c5806ad8768766fe6" gracePeriod=10 Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.866742 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"23c3b676-60b7-4d06-b284-2be1fd9824c5","Type":"ContainerStarted","Data":"67a91e908ce878be77ccf324ad16836fb13d361531dea453141449326b88ec75"} Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.872372 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e60da8e4-38fb-49f2-9325-8d8d55109b49","Type":"ContainerStarted","Data":"43e16240891ff330e4256ca3bf9dad447c128bf90febf76a360d3e9ec87d1a02"} Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.872662 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="23c3b676-60b7-4d06-b284-2be1fd9824c5" containerName="cinder-api-log" containerID="cri-o://f8e579249b2487677f59ecf0c0807df241d0db66bf3c27d5e9d18e8b7d1fc35a" gracePeriod=30 Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.872793 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" 
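
"Killing container with a grace period" is the kubelet's graceful stop: the runtime delivers SIGTERM, waits up to gracePeriod seconds (10 for the dnsmasq container, 30 for the cinder-api containers), and only then escalates to SIGKILL. A process that dies on SIGTERM is reported with exit code 128+15 = 143, which is exactly what the cinder-api-log container shows a few seconds later in this log. A simplified sketch of the escalation, run here against a plain process rather than a container:

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // stopWithGrace sends SIGTERM, waits up to grace, then SIGKILLs,
    // mirroring the runtime's graceful-stop escalation.
    func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        cmd.Process.Signal(syscall.SIGTERM)
        select {
        case <-done:
        case <-time.After(grace):
            cmd.Process.Kill() // grace period expired: SIGKILL
            <-done
        }
        ws := cmd.ProcessState.Sys().(syscall.WaitStatus)
        if ws.Signaled() {
            // runtimes report 128+signal, e.g. 143 for SIGTERM
            fmt.Println("exitCode =", 128+int(ws.Signal()))
        } else {
            fmt.Println("exitCode =", ws.ExitStatus())
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        stopWithGrace(cmd, 10*time.Second) // prints: exitCode = 143
    }

By contrast, a container that traps SIGTERM and shuts down cleanly exits 0, which is why the paired cinder-api container in this log reports exitCode=0 while its log sidecar reports exitCode=143.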
podUID="23c3b676-60b7-4d06-b284-2be1fd9824c5" containerName="cinder-api" containerID="cri-o://67a91e908ce878be77ccf324ad16836fb13d361531dea453141449326b88ec75" gracePeriod=30 Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.872814 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.895871 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.28633899 podStartE2EDuration="7.895855761s" podCreationTimestamp="2025-11-23 15:02:53 +0000 UTC" firstStartedPulling="2025-11-23 15:02:55.121108807 +0000 UTC m=+1026.360728661" lastFinishedPulling="2025-11-23 15:02:59.730625588 +0000 UTC m=+1030.970245432" observedRunningTime="2025-11-23 15:03:00.890517302 +0000 UTC m=+1032.130137156" watchObservedRunningTime="2025-11-23 15:03:00.895855761 +0000 UTC m=+1032.135475605" Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.910682 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" podStartSLOduration=4.910664802 podStartE2EDuration="4.910664802s" podCreationTimestamp="2025-11-23 15:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:03:00.907032171 +0000 UTC m=+1032.146652015" watchObservedRunningTime="2025-11-23 15:03:00.910664802 +0000 UTC m=+1032.150284646" Nov 23 15:03:00 crc kubenswrapper[4718]: I1123 15:03:00.935940 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.935917574 podStartE2EDuration="4.935917574s" podCreationTimestamp="2025-11-23 15:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:03:00.934301869 +0000 UTC m=+1032.173921713" watchObservedRunningTime="2025-11-23 15:03:00.935917574 +0000 UTC m=+1032.175537428" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.043607 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-57485cc44d-fj2x7"] Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.474290 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.618313 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-677cc89bb-l2pwx" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.650926 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.659486 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f77xx\" (UniqueName: \"kubernetes.io/projected/23c3b676-60b7-4d06-b284-2be1fd9824c5-kube-api-access-f77xx\") pod \"23c3b676-60b7-4d06-b284-2be1fd9824c5\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.659562 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23c3b676-60b7-4d06-b284-2be1fd9824c5-combined-ca-bundle\") pod \"23c3b676-60b7-4d06-b284-2be1fd9824c5\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.659648 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-dns-swift-storage-0\") pod \"ef450690-daaa-42a6-9b05-74dc308bdf50\" (UID: \"ef450690-daaa-42a6-9b05-74dc308bdf50\") " Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.659695 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-ovsdbserver-nb\") pod \"ef450690-daaa-42a6-9b05-74dc308bdf50\" (UID: \"ef450690-daaa-42a6-9b05-74dc308bdf50\") " Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.659728 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-ovsdbserver-sb\") pod \"ef450690-daaa-42a6-9b05-74dc308bdf50\" (UID: \"ef450690-daaa-42a6-9b05-74dc308bdf50\") " Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.659833 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-config\") pod \"ef450690-daaa-42a6-9b05-74dc308bdf50\" (UID: \"ef450690-daaa-42a6-9b05-74dc308bdf50\") " Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.659857 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfm8k\" (UniqueName: \"kubernetes.io/projected/ef450690-daaa-42a6-9b05-74dc308bdf50-kube-api-access-tfm8k\") pod \"ef450690-daaa-42a6-9b05-74dc308bdf50\" (UID: \"ef450690-daaa-42a6-9b05-74dc308bdf50\") " Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.659979 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23c3b676-60b7-4d06-b284-2be1fd9824c5-config-data\") pod \"23c3b676-60b7-4d06-b284-2be1fd9824c5\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.660028 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-dns-svc\") pod \"ef450690-daaa-42a6-9b05-74dc308bdf50\" (UID: \"ef450690-daaa-42a6-9b05-74dc308bdf50\") " Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.660086 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/23c3b676-60b7-4d06-b284-2be1fd9824c5-etc-machine-id\") pod \"23c3b676-60b7-4d06-b284-2be1fd9824c5\" (UID: 
\"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.660167 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23c3b676-60b7-4d06-b284-2be1fd9824c5-config-data-custom\") pod \"23c3b676-60b7-4d06-b284-2be1fd9824c5\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.660324 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23c3b676-60b7-4d06-b284-2be1fd9824c5-logs\") pod \"23c3b676-60b7-4d06-b284-2be1fd9824c5\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.660347 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23c3b676-60b7-4d06-b284-2be1fd9824c5-scripts\") pod \"23c3b676-60b7-4d06-b284-2be1fd9824c5\" (UID: \"23c3b676-60b7-4d06-b284-2be1fd9824c5\") " Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.669073 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23c3b676-60b7-4d06-b284-2be1fd9824c5-scripts" (OuterVolumeSpecName: "scripts") pod "23c3b676-60b7-4d06-b284-2be1fd9824c5" (UID: "23c3b676-60b7-4d06-b284-2be1fd9824c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.669335 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23c3b676-60b7-4d06-b284-2be1fd9824c5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "23c3b676-60b7-4d06-b284-2be1fd9824c5" (UID: "23c3b676-60b7-4d06-b284-2be1fd9824c5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.678262 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23c3b676-60b7-4d06-b284-2be1fd9824c5-kube-api-access-f77xx" (OuterVolumeSpecName: "kube-api-access-f77xx") pod "23c3b676-60b7-4d06-b284-2be1fd9824c5" (UID: "23c3b676-60b7-4d06-b284-2be1fd9824c5"). InnerVolumeSpecName "kube-api-access-f77xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.679345 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23c3b676-60b7-4d06-b284-2be1fd9824c5-logs" (OuterVolumeSpecName: "logs") pod "23c3b676-60b7-4d06-b284-2be1fd9824c5" (UID: "23c3b676-60b7-4d06-b284-2be1fd9824c5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.712086 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23c3b676-60b7-4d06-b284-2be1fd9824c5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "23c3b676-60b7-4d06-b284-2be1fd9824c5" (UID: "23c3b676-60b7-4d06-b284-2be1fd9824c5"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.713735 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef450690-daaa-42a6-9b05-74dc308bdf50-kube-api-access-tfm8k" (OuterVolumeSpecName: "kube-api-access-tfm8k") pod "ef450690-daaa-42a6-9b05-74dc308bdf50" (UID: "ef450690-daaa-42a6-9b05-74dc308bdf50"). InnerVolumeSpecName "kube-api-access-tfm8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.755094 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23c3b676-60b7-4d06-b284-2be1fd9824c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23c3b676-60b7-4d06-b284-2be1fd9824c5" (UID: "23c3b676-60b7-4d06-b284-2be1fd9824c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.762944 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23c3b676-60b7-4d06-b284-2be1fd9824c5-logs\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.762972 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23c3b676-60b7-4d06-b284-2be1fd9824c5-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.762981 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f77xx\" (UniqueName: \"kubernetes.io/projected/23c3b676-60b7-4d06-b284-2be1fd9824c5-kube-api-access-f77xx\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.762991 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23c3b676-60b7-4d06-b284-2be1fd9824c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.763000 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfm8k\" (UniqueName: \"kubernetes.io/projected/ef450690-daaa-42a6-9b05-74dc308bdf50-kube-api-access-tfm8k\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.763010 4718 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/23c3b676-60b7-4d06-b284-2be1fd9824c5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.763021 4718 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23c3b676-60b7-4d06-b284-2be1fd9824c5-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.786111 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ef450690-daaa-42a6-9b05-74dc308bdf50" (UID: "ef450690-daaa-42a6-9b05-74dc308bdf50"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.864546 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.892231 4718 generic.go:334] "Generic (PLEG): container finished" podID="ef450690-daaa-42a6-9b05-74dc308bdf50" containerID="cfbe398ef321e2f930e95259f15242a63ee82c1dddf1392c5806ad8768766fe6" exitCode=0 Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.892320 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" event={"ID":"ef450690-daaa-42a6-9b05-74dc308bdf50","Type":"ContainerDied","Data":"cfbe398ef321e2f930e95259f15242a63ee82c1dddf1392c5806ad8768766fe6"} Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.892349 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" event={"ID":"ef450690-daaa-42a6-9b05-74dc308bdf50","Type":"ContainerDied","Data":"7726eb7428a5265f67f6d8d9282f512dec27e2a09e97b3b6021eb0871bcdbc8b"} Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.892365 4718 scope.go:117] "RemoveContainer" containerID="cfbe398ef321e2f930e95259f15242a63ee82c1dddf1392c5806ad8768766fe6" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.892496 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-w86hh" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.908769 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23c3b676-60b7-4d06-b284-2be1fd9824c5-config-data" (OuterVolumeSpecName: "config-data") pod "23c3b676-60b7-4d06-b284-2be1fd9824c5" (UID: "23c3b676-60b7-4d06-b284-2be1fd9824c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.912866 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-config" (OuterVolumeSpecName: "config") pod "ef450690-daaa-42a6-9b05-74dc308bdf50" (UID: "ef450690-daaa-42a6-9b05-74dc308bdf50"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.916959 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ef450690-daaa-42a6-9b05-74dc308bdf50" (UID: "ef450690-daaa-42a6-9b05-74dc308bdf50"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.917419 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ef450690-daaa-42a6-9b05-74dc308bdf50" (UID: "ef450690-daaa-42a6-9b05-74dc308bdf50"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.918690 4718 generic.go:334] "Generic (PLEG): container finished" podID="23c3b676-60b7-4d06-b284-2be1fd9824c5" containerID="67a91e908ce878be77ccf324ad16836fb13d361531dea453141449326b88ec75" exitCode=0 Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.918726 4718 generic.go:334] "Generic (PLEG): container finished" podID="23c3b676-60b7-4d06-b284-2be1fd9824c5" containerID="f8e579249b2487677f59ecf0c0807df241d0db66bf3c27d5e9d18e8b7d1fc35a" exitCode=143 Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.918807 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"23c3b676-60b7-4d06-b284-2be1fd9824c5","Type":"ContainerDied","Data":"67a91e908ce878be77ccf324ad16836fb13d361531dea453141449326b88ec75"} Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.918844 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"23c3b676-60b7-4d06-b284-2be1fd9824c5","Type":"ContainerDied","Data":"f8e579249b2487677f59ecf0c0807df241d0db66bf3c27d5e9d18e8b7d1fc35a"} Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.918859 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"23c3b676-60b7-4d06-b284-2be1fd9824c5","Type":"ContainerDied","Data":"81e59305d68468a67b6d12e331e95d8b330e5afc6bf805187d8413fe39c18b4d"} Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.918928 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.930389 4718 generic.go:334] "Generic (PLEG): container finished" podID="251d390c-d74c-4a3f-9ec4-9f995abc4c24" containerID="d7d8e0114cc485376e38ecb5de737423ba2b13c9bc4363876146b572f27e6949" exitCode=0 Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.930507 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th" event={"ID":"251d390c-d74c-4a3f-9ec4-9f995abc4c24","Type":"ContainerDied","Data":"d7d8e0114cc485376e38ecb5de737423ba2b13c9bc4363876146b572f27e6949"} Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.941786 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57485cc44d-fj2x7" event={"ID":"e89948ec-25f4-4d02-985b-f9fdd43437d1","Type":"ContainerStarted","Data":"027f3f200ee28e1086fe86cf8460915abc92236677554634a41fa89d256cea22"} Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.941848 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57485cc44d-fj2x7" event={"ID":"e89948ec-25f4-4d02-985b-f9fdd43437d1","Type":"ContainerStarted","Data":"7bed19f2044f7d120dd3130ea58adc866be8a93f37516a87c2385afd0d66d9c7"} Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.952887 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ef450690-daaa-42a6-9b05-74dc308bdf50" (UID: "ef450690-daaa-42a6-9b05-74dc308bdf50"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.957538 4718 scope.go:117] "RemoveContainer" containerID="de23edc6e57af72c653bbdb88c73900352cb2170bb316d749cc52e9f70845cad" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.968622 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.968807 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.968869 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-config\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.968920 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23c3b676-60b7-4d06-b284-2be1fd9824c5-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:01 crc kubenswrapper[4718]: I1123 15:03:01.968968 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef450690-daaa-42a6-9b05-74dc308bdf50-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.008495 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.025866 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.033843 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 23 15:03:02 crc kubenswrapper[4718]: E1123 15:03:02.034242 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23c3b676-60b7-4d06-b284-2be1fd9824c5" containerName="cinder-api" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.034259 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="23c3b676-60b7-4d06-b284-2be1fd9824c5" containerName="cinder-api" Nov 23 15:03:02 crc kubenswrapper[4718]: E1123 15:03:02.034279 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef450690-daaa-42a6-9b05-74dc308bdf50" containerName="dnsmasq-dns" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.034286 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef450690-daaa-42a6-9b05-74dc308bdf50" containerName="dnsmasq-dns" Nov 23 15:03:02 crc kubenswrapper[4718]: E1123 15:03:02.034296 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef450690-daaa-42a6-9b05-74dc308bdf50" containerName="init" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.034302 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef450690-daaa-42a6-9b05-74dc308bdf50" containerName="init" Nov 23 15:03:02 crc kubenswrapper[4718]: E1123 15:03:02.034317 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23c3b676-60b7-4d06-b284-2be1fd9824c5" containerName="cinder-api-log" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.034323 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="23c3b676-60b7-4d06-b284-2be1fd9824c5" containerName="cinder-api-log" Nov 23 15:03:02 crc 
kubenswrapper[4718]: I1123 15:03:02.034490 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef450690-daaa-42a6-9b05-74dc308bdf50" containerName="dnsmasq-dns" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.034509 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="23c3b676-60b7-4d06-b284-2be1fd9824c5" containerName="cinder-api-log" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.034526 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="23c3b676-60b7-4d06-b284-2be1fd9824c5" containerName="cinder-api" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.035668 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.044318 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.044637 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.050598 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.062744 4718 scope.go:117] "RemoveContainer" containerID="cfbe398ef321e2f930e95259f15242a63ee82c1dddf1392c5806ad8768766fe6" Nov 23 15:03:02 crc kubenswrapper[4718]: E1123 15:03:02.063088 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfbe398ef321e2f930e95259f15242a63ee82c1dddf1392c5806ad8768766fe6\": container with ID starting with cfbe398ef321e2f930e95259f15242a63ee82c1dddf1392c5806ad8768766fe6 not found: ID does not exist" containerID="cfbe398ef321e2f930e95259f15242a63ee82c1dddf1392c5806ad8768766fe6" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.063122 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfbe398ef321e2f930e95259f15242a63ee82c1dddf1392c5806ad8768766fe6"} err="failed to get container status \"cfbe398ef321e2f930e95259f15242a63ee82c1dddf1392c5806ad8768766fe6\": rpc error: code = NotFound desc = could not find container \"cfbe398ef321e2f930e95259f15242a63ee82c1dddf1392c5806ad8768766fe6\": container with ID starting with cfbe398ef321e2f930e95259f15242a63ee82c1dddf1392c5806ad8768766fe6 not found: ID does not exist" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.063145 4718 scope.go:117] "RemoveContainer" containerID="de23edc6e57af72c653bbdb88c73900352cb2170bb316d749cc52e9f70845cad" Nov 23 15:03:02 crc kubenswrapper[4718]: E1123 15:03:02.063341 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de23edc6e57af72c653bbdb88c73900352cb2170bb316d749cc52e9f70845cad\": container with ID starting with de23edc6e57af72c653bbdb88c73900352cb2170bb316d749cc52e9f70845cad not found: ID does not exist" containerID="de23edc6e57af72c653bbdb88c73900352cb2170bb316d749cc52e9f70845cad" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.063365 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de23edc6e57af72c653bbdb88c73900352cb2170bb316d749cc52e9f70845cad"} err="failed to get container status \"de23edc6e57af72c653bbdb88c73900352cb2170bb316d749cc52e9f70845cad\": rpc error: code = NotFound desc = could not find container 
\"de23edc6e57af72c653bbdb88c73900352cb2170bb316d749cc52e9f70845cad\": container with ID starting with de23edc6e57af72c653bbdb88c73900352cb2170bb316d749cc52e9f70845cad not found: ID does not exist" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.063379 4718 scope.go:117] "RemoveContainer" containerID="67a91e908ce878be77ccf324ad16836fb13d361531dea453141449326b88ec75" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.076239 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a35d79c-43f8-4fbb-822d-d4b42a332068-config-data\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.076494 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a35d79c-43f8-4fbb-822d-d4b42a332068-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.076616 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a35d79c-43f8-4fbb-822d-d4b42a332068-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.076645 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a35d79c-43f8-4fbb-822d-d4b42a332068-scripts\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.076718 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzjdq\" (UniqueName: \"kubernetes.io/projected/3a35d79c-43f8-4fbb-822d-d4b42a332068-kube-api-access-pzjdq\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.076765 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a35d79c-43f8-4fbb-822d-d4b42a332068-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.076806 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a35d79c-43f8-4fbb-822d-d4b42a332068-config-data-custom\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.076939 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a35d79c-43f8-4fbb-822d-d4b42a332068-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.076990 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a35d79c-43f8-4fbb-822d-d4b42a332068-logs\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.077843 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.154610 4718 scope.go:117] "RemoveContainer" containerID="f8e579249b2487677f59ecf0c0807df241d0db66bf3c27d5e9d18e8b7d1fc35a" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.178628 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a35d79c-43f8-4fbb-822d-d4b42a332068-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.178696 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a35d79c-43f8-4fbb-822d-d4b42a332068-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.178714 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a35d79c-43f8-4fbb-822d-d4b42a332068-scripts\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.178742 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzjdq\" (UniqueName: \"kubernetes.io/projected/3a35d79c-43f8-4fbb-822d-d4b42a332068-kube-api-access-pzjdq\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.178763 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a35d79c-43f8-4fbb-822d-d4b42a332068-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.178777 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a35d79c-43f8-4fbb-822d-d4b42a332068-config-data-custom\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.178815 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a35d79c-43f8-4fbb-822d-d4b42a332068-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.178846 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a35d79c-43f8-4fbb-822d-d4b42a332068-logs\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.178890 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3a35d79c-43f8-4fbb-822d-d4b42a332068-config-data\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.183554 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a35d79c-43f8-4fbb-822d-d4b42a332068-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.184105 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a35d79c-43f8-4fbb-822d-d4b42a332068-logs\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.184196 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a35d79c-43f8-4fbb-822d-d4b42a332068-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.185207 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a35d79c-43f8-4fbb-822d-d4b42a332068-config-data\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.190649 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a35d79c-43f8-4fbb-822d-d4b42a332068-config-data-custom\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.191599 4718 scope.go:117] "RemoveContainer" containerID="67a91e908ce878be77ccf324ad16836fb13d361531dea453141449326b88ec75" Nov 23 15:03:02 crc kubenswrapper[4718]: E1123 15:03:02.195598 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67a91e908ce878be77ccf324ad16836fb13d361531dea453141449326b88ec75\": container with ID starting with 67a91e908ce878be77ccf324ad16836fb13d361531dea453141449326b88ec75 not found: ID does not exist" containerID="67a91e908ce878be77ccf324ad16836fb13d361531dea453141449326b88ec75" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.195652 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67a91e908ce878be77ccf324ad16836fb13d361531dea453141449326b88ec75"} err="failed to get container status \"67a91e908ce878be77ccf324ad16836fb13d361531dea453141449326b88ec75\": rpc error: code = NotFound desc = could not find container \"67a91e908ce878be77ccf324ad16836fb13d361531dea453141449326b88ec75\": container with ID starting with 67a91e908ce878be77ccf324ad16836fb13d361531dea453141449326b88ec75 not found: ID does not exist" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.195685 4718 scope.go:117] "RemoveContainer" containerID="f8e579249b2487677f59ecf0c0807df241d0db66bf3c27d5e9d18e8b7d1fc35a" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.205934 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a35d79c-43f8-4fbb-822d-d4b42a332068-scripts\") pod 
\"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.206027 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a35d79c-43f8-4fbb-822d-d4b42a332068-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: E1123 15:03:02.206149 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8e579249b2487677f59ecf0c0807df241d0db66bf3c27d5e9d18e8b7d1fc35a\": container with ID starting with f8e579249b2487677f59ecf0c0807df241d0db66bf3c27d5e9d18e8b7d1fc35a not found: ID does not exist" containerID="f8e579249b2487677f59ecf0c0807df241d0db66bf3c27d5e9d18e8b7d1fc35a" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.206252 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8e579249b2487677f59ecf0c0807df241d0db66bf3c27d5e9d18e8b7d1fc35a"} err="failed to get container status \"f8e579249b2487677f59ecf0c0807df241d0db66bf3c27d5e9d18e8b7d1fc35a\": rpc error: code = NotFound desc = could not find container \"f8e579249b2487677f59ecf0c0807df241d0db66bf3c27d5e9d18e8b7d1fc35a\": container with ID starting with f8e579249b2487677f59ecf0c0807df241d0db66bf3c27d5e9d18e8b7d1fc35a not found: ID does not exist" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.206348 4718 scope.go:117] "RemoveContainer" containerID="67a91e908ce878be77ccf324ad16836fb13d361531dea453141449326b88ec75" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.206424 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzjdq\" (UniqueName: \"kubernetes.io/projected/3a35d79c-43f8-4fbb-822d-d4b42a332068-kube-api-access-pzjdq\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.206388 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a35d79c-43f8-4fbb-822d-d4b42a332068-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3a35d79c-43f8-4fbb-822d-d4b42a332068\") " pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.208276 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67a91e908ce878be77ccf324ad16836fb13d361531dea453141449326b88ec75"} err="failed to get container status \"67a91e908ce878be77ccf324ad16836fb13d361531dea453141449326b88ec75\": rpc error: code = NotFound desc = could not find container \"67a91e908ce878be77ccf324ad16836fb13d361531dea453141449326b88ec75\": container with ID starting with 67a91e908ce878be77ccf324ad16836fb13d361531dea453141449326b88ec75 not found: ID does not exist" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.208331 4718 scope.go:117] "RemoveContainer" containerID="f8e579249b2487677f59ecf0c0807df241d0db66bf3c27d5e9d18e8b7d1fc35a" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.210893 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8e579249b2487677f59ecf0c0807df241d0db66bf3c27d5e9d18e8b7d1fc35a"} err="failed to get container status \"f8e579249b2487677f59ecf0c0807df241d0db66bf3c27d5e9d18e8b7d1fc35a\": rpc error: code = NotFound desc = could not 
find container \"f8e579249b2487677f59ecf0c0807df241d0db66bf3c27d5e9d18e8b7d1fc35a\": container with ID starting with f8e579249b2487677f59ecf0c0807df241d0db66bf3c27d5e9d18e8b7d1fc35a not found: ID does not exist" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.349663 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-w86hh"] Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.365489 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-w86hh"] Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.389726 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.474410 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23c3b676-60b7-4d06-b284-2be1fd9824c5" path="/var/lib/kubelet/pods/23c3b676-60b7-4d06-b284-2be1fd9824c5/volumes" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.475545 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef450690-daaa-42a6-9b05-74dc308bdf50" path="/var/lib/kubelet/pods/ef450690-daaa-42a6-9b05-74dc308bdf50/volumes" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.895013 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.957713 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th" event={"ID":"251d390c-d74c-4a3f-9ec4-9f995abc4c24","Type":"ContainerStarted","Data":"1b799bcc2470081d06a138b4616aa75415a084d4e4f05373afbfb1cedd22bb62"} Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.957956 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.962776 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57485cc44d-fj2x7" event={"ID":"e89948ec-25f4-4d02-985b-f9fdd43437d1","Type":"ContainerStarted","Data":"bae1c5833127e59611ac73e1a9bce7b907b86a6fd777fae325f12b7f9860b496"} Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.969019 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3a35d79c-43f8-4fbb-822d-d4b42a332068","Type":"ContainerStarted","Data":"6ec1b0ae38e695eb2ea094e618a7b651c8cc44915de2142cc8b5f2ac2a37d9b3"} Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.970944 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-57485cc44d-fj2x7" Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.978223 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e60da8e4-38fb-49f2-9325-8d8d55109b49","Type":"ContainerStarted","Data":"3bda54d7c2cc0a70c59159519d8f1e0dee8892dc7662316c5175cbc5b92bbc87"} Nov 23 15:03:02 crc kubenswrapper[4718]: I1123 15:03:02.987698 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th" podStartSLOduration=3.987680111 podStartE2EDuration="3.987680111s" podCreationTimestamp="2025-11-23 15:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:03:02.979047231 +0000 UTC m=+1034.218667075" watchObservedRunningTime="2025-11-23 15:03:02.987680111 +0000 UTC m=+1034.227299955" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 
15:03:03.003586 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-57485cc44d-fj2x7" podStartSLOduration=4.003567053 podStartE2EDuration="4.003567053s" podCreationTimestamp="2025-11-23 15:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:03:03.001497615 +0000 UTC m=+1034.241117469" watchObservedRunningTime="2025-11-23 15:03:03.003567053 +0000 UTC m=+1034.243186897" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.031002 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.093019571 podStartE2EDuration="7.030986685s" podCreationTimestamp="2025-11-23 15:02:56 +0000 UTC" firstStartedPulling="2025-11-23 15:02:57.791364808 +0000 UTC m=+1029.030984652" lastFinishedPulling="2025-11-23 15:02:59.729331922 +0000 UTC m=+1030.968951766" observedRunningTime="2025-11-23 15:03:03.026869101 +0000 UTC m=+1034.266488945" watchObservedRunningTime="2025-11-23 15:03:03.030986685 +0000 UTC m=+1034.270606529" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.264997 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6566df567c-72brl"] Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.274660 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6566df567c-72brl" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.282315 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6566df567c-72brl"] Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.284599 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.284790 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.305929 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14545901-b770-4d66-8692-51937e97d24a-internal-tls-certs\") pod \"neutron-6566df567c-72brl\" (UID: \"14545901-b770-4d66-8692-51937e97d24a\") " pod="openstack/neutron-6566df567c-72brl" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.306055 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/14545901-b770-4d66-8692-51937e97d24a-httpd-config\") pod \"neutron-6566df567c-72brl\" (UID: \"14545901-b770-4d66-8692-51937e97d24a\") " pod="openstack/neutron-6566df567c-72brl" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.306085 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/14545901-b770-4d66-8692-51937e97d24a-ovndb-tls-certs\") pod \"neutron-6566df567c-72brl\" (UID: \"14545901-b770-4d66-8692-51937e97d24a\") " pod="openstack/neutron-6566df567c-72brl" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.306114 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5jmw\" (UniqueName: \"kubernetes.io/projected/14545901-b770-4d66-8692-51937e97d24a-kube-api-access-s5jmw\") pod \"neutron-6566df567c-72brl\" (UID: \"14545901-b770-4d66-8692-51937e97d24a\") " 
pod="openstack/neutron-6566df567c-72brl" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.306190 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/14545901-b770-4d66-8692-51937e97d24a-config\") pod \"neutron-6566df567c-72brl\" (UID: \"14545901-b770-4d66-8692-51937e97d24a\") " pod="openstack/neutron-6566df567c-72brl" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.306301 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14545901-b770-4d66-8692-51937e97d24a-combined-ca-bundle\") pod \"neutron-6566df567c-72brl\" (UID: \"14545901-b770-4d66-8692-51937e97d24a\") " pod="openstack/neutron-6566df567c-72brl" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.306345 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14545901-b770-4d66-8692-51937e97d24a-public-tls-certs\") pod \"neutron-6566df567c-72brl\" (UID: \"14545901-b770-4d66-8692-51937e97d24a\") " pod="openstack/neutron-6566df567c-72brl" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.408968 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14545901-b770-4d66-8692-51937e97d24a-combined-ca-bundle\") pod \"neutron-6566df567c-72brl\" (UID: \"14545901-b770-4d66-8692-51937e97d24a\") " pod="openstack/neutron-6566df567c-72brl" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.409160 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14545901-b770-4d66-8692-51937e97d24a-public-tls-certs\") pod \"neutron-6566df567c-72brl\" (UID: \"14545901-b770-4d66-8692-51937e97d24a\") " pod="openstack/neutron-6566df567c-72brl" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.409191 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14545901-b770-4d66-8692-51937e97d24a-internal-tls-certs\") pod \"neutron-6566df567c-72brl\" (UID: \"14545901-b770-4d66-8692-51937e97d24a\") " pod="openstack/neutron-6566df567c-72brl" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.409321 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/14545901-b770-4d66-8692-51937e97d24a-httpd-config\") pod \"neutron-6566df567c-72brl\" (UID: \"14545901-b770-4d66-8692-51937e97d24a\") " pod="openstack/neutron-6566df567c-72brl" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.409342 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/14545901-b770-4d66-8692-51937e97d24a-ovndb-tls-certs\") pod \"neutron-6566df567c-72brl\" (UID: \"14545901-b770-4d66-8692-51937e97d24a\") " pod="openstack/neutron-6566df567c-72brl" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.409361 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5jmw\" (UniqueName: \"kubernetes.io/projected/14545901-b770-4d66-8692-51937e97d24a-kube-api-access-s5jmw\") pod \"neutron-6566df567c-72brl\" (UID: \"14545901-b770-4d66-8692-51937e97d24a\") " pod="openstack/neutron-6566df567c-72brl" Nov 23 15:03:03 crc 
kubenswrapper[4718]: I1123 15:03:03.409415 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/14545901-b770-4d66-8692-51937e97d24a-config\") pod \"neutron-6566df567c-72brl\" (UID: \"14545901-b770-4d66-8692-51937e97d24a\") " pod="openstack/neutron-6566df567c-72brl" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.420169 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/14545901-b770-4d66-8692-51937e97d24a-httpd-config\") pod \"neutron-6566df567c-72brl\" (UID: \"14545901-b770-4d66-8692-51937e97d24a\") " pod="openstack/neutron-6566df567c-72brl" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.421226 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14545901-b770-4d66-8692-51937e97d24a-internal-tls-certs\") pod \"neutron-6566df567c-72brl\" (UID: \"14545901-b770-4d66-8692-51937e97d24a\") " pod="openstack/neutron-6566df567c-72brl" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.421263 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14545901-b770-4d66-8692-51937e97d24a-public-tls-certs\") pod \"neutron-6566df567c-72brl\" (UID: \"14545901-b770-4d66-8692-51937e97d24a\") " pod="openstack/neutron-6566df567c-72brl" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.434296 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/14545901-b770-4d66-8692-51937e97d24a-ovndb-tls-certs\") pod \"neutron-6566df567c-72brl\" (UID: \"14545901-b770-4d66-8692-51937e97d24a\") " pod="openstack/neutron-6566df567c-72brl" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.435959 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14545901-b770-4d66-8692-51937e97d24a-combined-ca-bundle\") pod \"neutron-6566df567c-72brl\" (UID: \"14545901-b770-4d66-8692-51937e97d24a\") " pod="openstack/neutron-6566df567c-72brl" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.436023 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5jmw\" (UniqueName: \"kubernetes.io/projected/14545901-b770-4d66-8692-51937e97d24a-kube-api-access-s5jmw\") pod \"neutron-6566df567c-72brl\" (UID: \"14545901-b770-4d66-8692-51937e97d24a\") " pod="openstack/neutron-6566df567c-72brl" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.443967 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/14545901-b770-4d66-8692-51937e97d24a-config\") pod \"neutron-6566df567c-72brl\" (UID: \"14545901-b770-4d66-8692-51937e97d24a\") " pod="openstack/neutron-6566df567c-72brl" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.603693 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6566df567c-72brl" Nov 23 15:03:03 crc kubenswrapper[4718]: I1123 15:03:03.989306 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3a35d79c-43f8-4fbb-822d-d4b42a332068","Type":"ContainerStarted","Data":"023290dbd5e3674a821b2d9fcd711c8bc812dc082217928085d27add3821ef46"} Nov 23 15:03:04 crc kubenswrapper[4718]: I1123 15:03:04.192937 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6566df567c-72brl"] Nov 23 15:03:04 crc kubenswrapper[4718]: I1123 15:03:04.352344 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5696759568-pxlzs" podUID="5dba5adf-299f-404c-91f9-c5848e9babe4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Nov 23 15:03:05 crc kubenswrapper[4718]: I1123 15:03:05.010107 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6566df567c-72brl" event={"ID":"14545901-b770-4d66-8692-51937e97d24a","Type":"ContainerStarted","Data":"8946cae23b21415eb0477d41a9e240af90e197bf75bc31c6ff5d7073444fa1f3"} Nov 23 15:03:05 crc kubenswrapper[4718]: I1123 15:03:05.010403 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6566df567c-72brl" event={"ID":"14545901-b770-4d66-8692-51937e97d24a","Type":"ContainerStarted","Data":"295d44476f78cbc30560354e194e1ae731952843177a259373f465f0066a482e"} Nov 23 15:03:05 crc kubenswrapper[4718]: I1123 15:03:05.010414 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6566df567c-72brl" event={"ID":"14545901-b770-4d66-8692-51937e97d24a","Type":"ContainerStarted","Data":"a8c15599b7973f763247cef9b6029d5e28a2d27142554b75a852c345703bdd5b"} Nov 23 15:03:05 crc kubenswrapper[4718]: I1123 15:03:05.010465 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6566df567c-72brl" Nov 23 15:03:05 crc kubenswrapper[4718]: I1123 15:03:05.013534 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3a35d79c-43f8-4fbb-822d-d4b42a332068","Type":"ContainerStarted","Data":"e2b6ef59461072f75b3367eba36fd27fd395df84f8c9f48e27e0e72e37f7915d"} Nov 23 15:03:05 crc kubenswrapper[4718]: I1123 15:03:05.013587 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 23 15:03:05 crc kubenswrapper[4718]: I1123 15:03:05.038371 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6566df567c-72brl" podStartSLOduration=2.038350414 podStartE2EDuration="2.038350414s" podCreationTimestamp="2025-11-23 15:03:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:03:05.026644915 +0000 UTC m=+1036.266264769" watchObservedRunningTime="2025-11-23 15:03:05.038350414 +0000 UTC m=+1036.277970258" Nov 23 15:03:05 crc kubenswrapper[4718]: I1123 15:03:05.059266 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.059244164 podStartE2EDuration="4.059244164s" podCreationTimestamp="2025-11-23 15:03:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:03:05.052791169 +0000 UTC m=+1036.292411023" watchObservedRunningTime="2025-11-23 
15:03:05.059244164 +0000 UTC m=+1036.298864008" Nov 23 15:03:07 crc kubenswrapper[4718]: I1123 15:03:07.045959 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:03:07 crc kubenswrapper[4718]: I1123 15:03:07.067767 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5c6d99969d-lw2d7" Nov 23 15:03:07 crc kubenswrapper[4718]: I1123 15:03:07.075029 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 23 15:03:07 crc kubenswrapper[4718]: I1123 15:03:07.141783 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-677cc89bb-l2pwx"] Nov 23 15:03:07 crc kubenswrapper[4718]: I1123 15:03:07.142050 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-677cc89bb-l2pwx" podUID="0f563293-98ea-4cd4-9067-95a64333591d" containerName="barbican-api-log" containerID="cri-o://19aefae9eafc6814302ce2784b3e12ccd8963356bd809309bf30280688a4e6a5" gracePeriod=30 Nov 23 15:03:07 crc kubenswrapper[4718]: I1123 15:03:07.142135 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-677cc89bb-l2pwx" podUID="0f563293-98ea-4cd4-9067-95a64333591d" containerName="barbican-api" containerID="cri-o://801c8d94669a0fe6bb445e9853e0756bf5ab321fc956557e7147de76d46e8e6d" gracePeriod=30 Nov 23 15:03:07 crc kubenswrapper[4718]: I1123 15:03:07.146965 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-677cc89bb-l2pwx" podUID="0f563293-98ea-4cd4-9067-95a64333591d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": EOF" Nov 23 15:03:07 crc kubenswrapper[4718]: I1123 15:03:07.147148 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-677cc89bb-l2pwx" podUID="0f563293-98ea-4cd4-9067-95a64333591d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": EOF" Nov 23 15:03:07 crc kubenswrapper[4718]: I1123 15:03:07.385783 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 23 15:03:08 crc kubenswrapper[4718]: I1123 15:03:08.037308 4718 generic.go:334] "Generic (PLEG): container finished" podID="0f563293-98ea-4cd4-9067-95a64333591d" containerID="19aefae9eafc6814302ce2784b3e12ccd8963356bd809309bf30280688a4e6a5" exitCode=143 Nov 23 15:03:08 crc kubenswrapper[4718]: I1123 15:03:08.037412 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-677cc89bb-l2pwx" event={"ID":"0f563293-98ea-4cd4-9067-95a64333591d","Type":"ContainerDied","Data":"19aefae9eafc6814302ce2784b3e12ccd8963356bd809309bf30280688a4e6a5"} Nov 23 15:03:08 crc kubenswrapper[4718]: I1123 15:03:08.090089 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 15:03:09 crc kubenswrapper[4718]: I1123 15:03:09.045126 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e60da8e4-38fb-49f2-9325-8d8d55109b49" containerName="cinder-scheduler" containerID="cri-o://43e16240891ff330e4256ca3bf9dad447c128bf90febf76a360d3e9ec87d1a02" gracePeriod=30 Nov 23 15:03:09 crc kubenswrapper[4718]: I1123 15:03:09.045195 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" 
podUID="e60da8e4-38fb-49f2-9325-8d8d55109b49" containerName="probe" containerID="cri-o://3bda54d7c2cc0a70c59159519d8f1e0dee8892dc7662316c5175cbc5b92bbc87" gracePeriod=30 Nov 23 15:03:10 crc kubenswrapper[4718]: I1123 15:03:10.059151 4718 generic.go:334] "Generic (PLEG): container finished" podID="e60da8e4-38fb-49f2-9325-8d8d55109b49" containerID="3bda54d7c2cc0a70c59159519d8f1e0dee8892dc7662316c5175cbc5b92bbc87" exitCode=0 Nov 23 15:03:10 crc kubenswrapper[4718]: I1123 15:03:10.059243 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e60da8e4-38fb-49f2-9325-8d8d55109b49","Type":"ContainerDied","Data":"3bda54d7c2cc0a70c59159519d8f1e0dee8892dc7662316c5175cbc5b92bbc87"} Nov 23 15:03:10 crc kubenswrapper[4718]: I1123 15:03:10.219619 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th" Nov 23 15:03:10 crc kubenswrapper[4718]: I1123 15:03:10.293579 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-ztlxn"] Nov 23 15:03:10 crc kubenswrapper[4718]: I1123 15:03:10.293824 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" podUID="28f71e41-ae1d-43b8-a822-c6a56b5e8119" containerName="dnsmasq-dns" containerID="cri-o://f77f96e3ff0fa2bc124bbe3084a2d4bf537ba0b3cb13639ceb07450e0e14bf38" gracePeriod=10 Nov 23 15:03:10 crc kubenswrapper[4718]: I1123 15:03:10.927855 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-677cc89bb-l2pwx" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.053624 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f563293-98ea-4cd4-9067-95a64333591d-config-data-custom\") pod \"0f563293-98ea-4cd4-9067-95a64333591d\" (UID: \"0f563293-98ea-4cd4-9067-95a64333591d\") " Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.053833 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8llk\" (UniqueName: \"kubernetes.io/projected/0f563293-98ea-4cd4-9067-95a64333591d-kube-api-access-r8llk\") pod \"0f563293-98ea-4cd4-9067-95a64333591d\" (UID: \"0f563293-98ea-4cd4-9067-95a64333591d\") " Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.053877 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f563293-98ea-4cd4-9067-95a64333591d-config-data\") pod \"0f563293-98ea-4cd4-9067-95a64333591d\" (UID: \"0f563293-98ea-4cd4-9067-95a64333591d\") " Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.053968 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f563293-98ea-4cd4-9067-95a64333591d-logs\") pod \"0f563293-98ea-4cd4-9067-95a64333591d\" (UID: \"0f563293-98ea-4cd4-9067-95a64333591d\") " Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.054005 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f563293-98ea-4cd4-9067-95a64333591d-combined-ca-bundle\") pod \"0f563293-98ea-4cd4-9067-95a64333591d\" (UID: \"0f563293-98ea-4cd4-9067-95a64333591d\") " Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.054383 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/0f563293-98ea-4cd4-9067-95a64333591d-logs" (OuterVolumeSpecName: "logs") pod "0f563293-98ea-4cd4-9067-95a64333591d" (UID: "0f563293-98ea-4cd4-9067-95a64333591d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.059194 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f563293-98ea-4cd4-9067-95a64333591d-kube-api-access-r8llk" (OuterVolumeSpecName: "kube-api-access-r8llk") pod "0f563293-98ea-4cd4-9067-95a64333591d" (UID: "0f563293-98ea-4cd4-9067-95a64333591d"). InnerVolumeSpecName "kube-api-access-r8llk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.071195 4718 generic.go:334] "Generic (PLEG): container finished" podID="0f563293-98ea-4cd4-9067-95a64333591d" containerID="801c8d94669a0fe6bb445e9853e0756bf5ab321fc956557e7147de76d46e8e6d" exitCode=0 Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.071250 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-677cc89bb-l2pwx" event={"ID":"0f563293-98ea-4cd4-9067-95a64333591d","Type":"ContainerDied","Data":"801c8d94669a0fe6bb445e9853e0756bf5ab321fc956557e7147de76d46e8e6d"} Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.071277 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-677cc89bb-l2pwx" event={"ID":"0f563293-98ea-4cd4-9067-95a64333591d","Type":"ContainerDied","Data":"d1dee27b40446b32d8439280124f70215db5951dd90790640802867a91514afa"} Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.071293 4718 scope.go:117] "RemoveContainer" containerID="801c8d94669a0fe6bb445e9853e0756bf5ab321fc956557e7147de76d46e8e6d" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.071392 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-677cc89bb-l2pwx" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.082963 4718 generic.go:334] "Generic (PLEG): container finished" podID="28f71e41-ae1d-43b8-a822-c6a56b5e8119" containerID="f77f96e3ff0fa2bc124bbe3084a2d4bf537ba0b3cb13639ceb07450e0e14bf38" exitCode=0 Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.083015 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" event={"ID":"28f71e41-ae1d-43b8-a822-c6a56b5e8119","Type":"ContainerDied","Data":"f77f96e3ff0fa2bc124bbe3084a2d4bf537ba0b3cb13639ceb07450e0e14bf38"} Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.095985 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f563293-98ea-4cd4-9067-95a64333591d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0f563293-98ea-4cd4-9067-95a64333591d" (UID: "0f563293-98ea-4cd4-9067-95a64333591d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.107604 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f563293-98ea-4cd4-9067-95a64333591d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f563293-98ea-4cd4-9067-95a64333591d" (UID: "0f563293-98ea-4cd4-9067-95a64333591d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.153741 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f563293-98ea-4cd4-9067-95a64333591d-config-data" (OuterVolumeSpecName: "config-data") pod "0f563293-98ea-4cd4-9067-95a64333591d" (UID: "0f563293-98ea-4cd4-9067-95a64333591d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.155742 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8llk\" (UniqueName: \"kubernetes.io/projected/0f563293-98ea-4cd4-9067-95a64333591d-kube-api-access-r8llk\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.155766 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f563293-98ea-4cd4-9067-95a64333591d-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.155775 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f563293-98ea-4cd4-9067-95a64333591d-logs\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.155784 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f563293-98ea-4cd4-9067-95a64333591d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.155792 4718 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f563293-98ea-4cd4-9067-95a64333591d-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.221390 4718 scope.go:117] "RemoveContainer" containerID="19aefae9eafc6814302ce2784b3e12ccd8963356bd809309bf30280688a4e6a5" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.238401 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.243733 4718 scope.go:117] "RemoveContainer" containerID="801c8d94669a0fe6bb445e9853e0756bf5ab321fc956557e7147de76d46e8e6d" Nov 23 15:03:11 crc kubenswrapper[4718]: E1123 15:03:11.246069 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"801c8d94669a0fe6bb445e9853e0756bf5ab321fc956557e7147de76d46e8e6d\": container with ID starting with 801c8d94669a0fe6bb445e9853e0756bf5ab321fc956557e7147de76d46e8e6d not found: ID does not exist" containerID="801c8d94669a0fe6bb445e9853e0756bf5ab321fc956557e7147de76d46e8e6d" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.246118 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"801c8d94669a0fe6bb445e9853e0756bf5ab321fc956557e7147de76d46e8e6d"} err="failed to get container status \"801c8d94669a0fe6bb445e9853e0756bf5ab321fc956557e7147de76d46e8e6d\": rpc error: code = NotFound desc = could not find container \"801c8d94669a0fe6bb445e9853e0756bf5ab321fc956557e7147de76d46e8e6d\": container with ID starting with 801c8d94669a0fe6bb445e9853e0756bf5ab321fc956557e7147de76d46e8e6d not found: ID does not exist" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.246146 4718 scope.go:117] "RemoveContainer" containerID="19aefae9eafc6814302ce2784b3e12ccd8963356bd809309bf30280688a4e6a5" Nov 23 15:03:11 crc kubenswrapper[4718]: E1123 15:03:11.246967 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19aefae9eafc6814302ce2784b3e12ccd8963356bd809309bf30280688a4e6a5\": container with ID starting with 19aefae9eafc6814302ce2784b3e12ccd8963356bd809309bf30280688a4e6a5 not found: ID does not exist" containerID="19aefae9eafc6814302ce2784b3e12ccd8963356bd809309bf30280688a4e6a5" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.247005 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19aefae9eafc6814302ce2784b3e12ccd8963356bd809309bf30280688a4e6a5"} err="failed to get container status \"19aefae9eafc6814302ce2784b3e12ccd8963356bd809309bf30280688a4e6a5\": rpc error: code = NotFound desc = could not find container \"19aefae9eafc6814302ce2784b3e12ccd8963356bd809309bf30280688a4e6a5\": container with ID starting with 19aefae9eafc6814302ce2784b3e12ccd8963356bd809309bf30280688a4e6a5 not found: ID does not exist" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.368130 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-dns-svc\") pod \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\" (UID: \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\") " Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.368207 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-ovsdbserver-nb\") pod \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\" (UID: \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\") " Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.368231 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-dns-swift-storage-0\") pod \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\" 
(UID: \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\") " Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.368376 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-config\") pod \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\" (UID: \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\") " Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.368428 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-ovsdbserver-sb\") pod \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\" (UID: \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\") " Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.368474 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mdwb\" (UniqueName: \"kubernetes.io/projected/28f71e41-ae1d-43b8-a822-c6a56b5e8119-kube-api-access-2mdwb\") pod \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\" (UID: \"28f71e41-ae1d-43b8-a822-c6a56b5e8119\") " Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.372514 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28f71e41-ae1d-43b8-a822-c6a56b5e8119-kube-api-access-2mdwb" (OuterVolumeSpecName: "kube-api-access-2mdwb") pod "28f71e41-ae1d-43b8-a822-c6a56b5e8119" (UID: "28f71e41-ae1d-43b8-a822-c6a56b5e8119"). InnerVolumeSpecName "kube-api-access-2mdwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.424875 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "28f71e41-ae1d-43b8-a822-c6a56b5e8119" (UID: "28f71e41-ae1d-43b8-a822-c6a56b5e8119"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.427512 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-677cc89bb-l2pwx"] Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.435182 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "28f71e41-ae1d-43b8-a822-c6a56b5e8119" (UID: "28f71e41-ae1d-43b8-a822-c6a56b5e8119"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.435777 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-677cc89bb-l2pwx"] Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.447129 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "28f71e41-ae1d-43b8-a822-c6a56b5e8119" (UID: "28f71e41-ae1d-43b8-a822-c6a56b5e8119"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.447708 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "28f71e41-ae1d-43b8-a822-c6a56b5e8119" (UID: "28f71e41-ae1d-43b8-a822-c6a56b5e8119"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.456401 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-config" (OuterVolumeSpecName: "config") pod "28f71e41-ae1d-43b8-a822-c6a56b5e8119" (UID: "28f71e41-ae1d-43b8-a822-c6a56b5e8119"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.470387 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-config\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.470455 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.470471 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mdwb\" (UniqueName: \"kubernetes.io/projected/28f71e41-ae1d-43b8-a822-c6a56b5e8119-kube-api-access-2mdwb\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.470482 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.470493 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:11 crc kubenswrapper[4718]: I1123 15:03:11.470505 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28f71e41-ae1d-43b8-a822-c6a56b5e8119-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.031123 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-844cdbd5f8-ptmjk" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.106687 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" event={"ID":"28f71e41-ae1d-43b8-a822-c6a56b5e8119","Type":"ContainerDied","Data":"c5942fd04c9dfd680ae7bb779538cb1d7a1af5beb26142d749e4c4ce10af1db9"} Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.106737 4718 scope.go:117] "RemoveContainer" containerID="f77f96e3ff0fa2bc124bbe3084a2d4bf537ba0b3cb13639ceb07450e0e14bf38" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.106731 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-ztlxn" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.144356 4718 scope.go:117] "RemoveContainer" containerID="6020b9b585aece875223f4bab3dee5447f27cdb7903030d05549d4860d545f6c" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.150258 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-ztlxn"] Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.157874 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-ztlxn"] Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.457025 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f563293-98ea-4cd4-9067-95a64333591d" path="/var/lib/kubelet/pods/0f563293-98ea-4cd4-9067-95a64333591d/volumes" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.458298 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28f71e41-ae1d-43b8-a822-c6a56b5e8119" path="/var/lib/kubelet/pods/28f71e41-ae1d-43b8-a822-c6a56b5e8119/volumes" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.597864 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 23 15:03:12 crc kubenswrapper[4718]: E1123 15:03:12.598255 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f563293-98ea-4cd4-9067-95a64333591d" containerName="barbican-api" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.598272 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f563293-98ea-4cd4-9067-95a64333591d" containerName="barbican-api" Nov 23 15:03:12 crc kubenswrapper[4718]: E1123 15:03:12.598285 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28f71e41-ae1d-43b8-a822-c6a56b5e8119" containerName="dnsmasq-dns" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.598292 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f71e41-ae1d-43b8-a822-c6a56b5e8119" containerName="dnsmasq-dns" Nov 23 15:03:12 crc kubenswrapper[4718]: E1123 15:03:12.598302 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28f71e41-ae1d-43b8-a822-c6a56b5e8119" containerName="init" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.598308 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f71e41-ae1d-43b8-a822-c6a56b5e8119" containerName="init" Nov 23 15:03:12 crc kubenswrapper[4718]: E1123 15:03:12.598321 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f563293-98ea-4cd4-9067-95a64333591d" containerName="barbican-api-log" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.598327 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f563293-98ea-4cd4-9067-95a64333591d" containerName="barbican-api-log" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.598589 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f563293-98ea-4cd4-9067-95a64333591d" containerName="barbican-api" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.598607 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="28f71e41-ae1d-43b8-a822-c6a56b5e8119" containerName="dnsmasq-dns" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.598628 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f563293-98ea-4cd4-9067-95a64333591d" containerName="barbican-api-log" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.599280 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.601637 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.602725 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-r6b6h" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.602868 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.610145 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.710072 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a2720257-17da-4635-bd8c-2d65b9e8b9f0-openstack-config-secret\") pod \"openstackclient\" (UID: \"a2720257-17da-4635-bd8c-2d65b9e8b9f0\") " pod="openstack/openstackclient" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.710159 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2720257-17da-4635-bd8c-2d65b9e8b9f0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a2720257-17da-4635-bd8c-2d65b9e8b9f0\") " pod="openstack/openstackclient" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.710277 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnnxg\" (UniqueName: \"kubernetes.io/projected/a2720257-17da-4635-bd8c-2d65b9e8b9f0-kube-api-access-lnnxg\") pod \"openstackclient\" (UID: \"a2720257-17da-4635-bd8c-2d65b9e8b9f0\") " pod="openstack/openstackclient" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.710312 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a2720257-17da-4635-bd8c-2d65b9e8b9f0-openstack-config\") pod \"openstackclient\" (UID: \"a2720257-17da-4635-bd8c-2d65b9e8b9f0\") " pod="openstack/openstackclient" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.746083 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.811677 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e60da8e4-38fb-49f2-9325-8d8d55109b49-config-data-custom\") pod \"e60da8e4-38fb-49f2-9325-8d8d55109b49\" (UID: \"e60da8e4-38fb-49f2-9325-8d8d55109b49\") " Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.812853 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e60da8e4-38fb-49f2-9325-8d8d55109b49-combined-ca-bundle\") pod \"e60da8e4-38fb-49f2-9325-8d8d55109b49\" (UID: \"e60da8e4-38fb-49f2-9325-8d8d55109b49\") " Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.812954 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e60da8e4-38fb-49f2-9325-8d8d55109b49-config-data\") pod \"e60da8e4-38fb-49f2-9325-8d8d55109b49\" (UID: \"e60da8e4-38fb-49f2-9325-8d8d55109b49\") " Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.813024 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e60da8e4-38fb-49f2-9325-8d8d55109b49-scripts\") pod \"e60da8e4-38fb-49f2-9325-8d8d55109b49\" (UID: \"e60da8e4-38fb-49f2-9325-8d8d55109b49\") " Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.813282 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e60da8e4-38fb-49f2-9325-8d8d55109b49-etc-machine-id\") pod \"e60da8e4-38fb-49f2-9325-8d8d55109b49\" (UID: \"e60da8e4-38fb-49f2-9325-8d8d55109b49\") " Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.813320 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlmph\" (UniqueName: \"kubernetes.io/projected/e60da8e4-38fb-49f2-9325-8d8d55109b49-kube-api-access-rlmph\") pod \"e60da8e4-38fb-49f2-9325-8d8d55109b49\" (UID: \"e60da8e4-38fb-49f2-9325-8d8d55109b49\") " Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.813623 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnnxg\" (UniqueName: \"kubernetes.io/projected/a2720257-17da-4635-bd8c-2d65b9e8b9f0-kube-api-access-lnnxg\") pod \"openstackclient\" (UID: \"a2720257-17da-4635-bd8c-2d65b9e8b9f0\") " pod="openstack/openstackclient" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.813662 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a2720257-17da-4635-bd8c-2d65b9e8b9f0-openstack-config\") pod \"openstackclient\" (UID: \"a2720257-17da-4635-bd8c-2d65b9e8b9f0\") " pod="openstack/openstackclient" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.813723 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a2720257-17da-4635-bd8c-2d65b9e8b9f0-openstack-config-secret\") pod \"openstackclient\" (UID: \"a2720257-17da-4635-bd8c-2d65b9e8b9f0\") " pod="openstack/openstackclient" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.813776 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a2720257-17da-4635-bd8c-2d65b9e8b9f0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a2720257-17da-4635-bd8c-2d65b9e8b9f0\") " pod="openstack/openstackclient" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.814559 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e60da8e4-38fb-49f2-9325-8d8d55109b49-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e60da8e4-38fb-49f2-9325-8d8d55109b49" (UID: "e60da8e4-38fb-49f2-9325-8d8d55109b49"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.821678 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a2720257-17da-4635-bd8c-2d65b9e8b9f0-openstack-config\") pod \"openstackclient\" (UID: \"a2720257-17da-4635-bd8c-2d65b9e8b9f0\") " pod="openstack/openstackclient" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.828666 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e60da8e4-38fb-49f2-9325-8d8d55109b49-scripts" (OuterVolumeSpecName: "scripts") pod "e60da8e4-38fb-49f2-9325-8d8d55109b49" (UID: "e60da8e4-38fb-49f2-9325-8d8d55109b49"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.830654 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e60da8e4-38fb-49f2-9325-8d8d55109b49-kube-api-access-rlmph" (OuterVolumeSpecName: "kube-api-access-rlmph") pod "e60da8e4-38fb-49f2-9325-8d8d55109b49" (UID: "e60da8e4-38fb-49f2-9325-8d8d55109b49"). InnerVolumeSpecName "kube-api-access-rlmph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.831189 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e60da8e4-38fb-49f2-9325-8d8d55109b49-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e60da8e4-38fb-49f2-9325-8d8d55109b49" (UID: "e60da8e4-38fb-49f2-9325-8d8d55109b49"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.831629 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2720257-17da-4635-bd8c-2d65b9e8b9f0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a2720257-17da-4635-bd8c-2d65b9e8b9f0\") " pod="openstack/openstackclient" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.841728 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a2720257-17da-4635-bd8c-2d65b9e8b9f0-openstack-config-secret\") pod \"openstackclient\" (UID: \"a2720257-17da-4635-bd8c-2d65b9e8b9f0\") " pod="openstack/openstackclient" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.844654 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnnxg\" (UniqueName: \"kubernetes.io/projected/a2720257-17da-4635-bd8c-2d65b9e8b9f0-kube-api-access-lnnxg\") pod \"openstackclient\" (UID: \"a2720257-17da-4635-bd8c-2d65b9e8b9f0\") " pod="openstack/openstackclient" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.895987 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e60da8e4-38fb-49f2-9325-8d8d55109b49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e60da8e4-38fb-49f2-9325-8d8d55109b49" (UID: "e60da8e4-38fb-49f2-9325-8d8d55109b49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.917551 4718 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e60da8e4-38fb-49f2-9325-8d8d55109b49-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.917592 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e60da8e4-38fb-49f2-9325-8d8d55109b49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.917601 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e60da8e4-38fb-49f2-9325-8d8d55109b49-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.917611 4718 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e60da8e4-38fb-49f2-9325-8d8d55109b49-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.917619 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlmph\" (UniqueName: \"kubernetes.io/projected/e60da8e4-38fb-49f2-9325-8d8d55109b49-kube-api-access-rlmph\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.923638 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 15:03:12 crc kubenswrapper[4718]: I1123 15:03:12.942215 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e60da8e4-38fb-49f2-9325-8d8d55109b49-config-data" (OuterVolumeSpecName: "config-data") pod "e60da8e4-38fb-49f2-9325-8d8d55109b49" (UID: "e60da8e4-38fb-49f2-9325-8d8d55109b49"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.019295 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e60da8e4-38fb-49f2-9325-8d8d55109b49-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.121741 4718 generic.go:334] "Generic (PLEG): container finished" podID="e60da8e4-38fb-49f2-9325-8d8d55109b49" containerID="43e16240891ff330e4256ca3bf9dad447c128bf90febf76a360d3e9ec87d1a02" exitCode=0 Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.122030 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.122042 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e60da8e4-38fb-49f2-9325-8d8d55109b49","Type":"ContainerDied","Data":"43e16240891ff330e4256ca3bf9dad447c128bf90febf76a360d3e9ec87d1a02"} Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.123071 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e60da8e4-38fb-49f2-9325-8d8d55109b49","Type":"ContainerDied","Data":"0ac9efd74ca14e5a0a6713309fe1ee23feafcd9c6b150dd457cabc75188ad3fb"} Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.123147 4718 scope.go:117] "RemoveContainer" containerID="3bda54d7c2cc0a70c59159519d8f1e0dee8892dc7662316c5175cbc5b92bbc87" Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.185624 4718 scope.go:117] "RemoveContainer" containerID="43e16240891ff330e4256ca3bf9dad447c128bf90febf76a360d3e9ec87d1a02" Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.194563 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.204145 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.238267 4718 scope.go:117] "RemoveContainer" containerID="3bda54d7c2cc0a70c59159519d8f1e0dee8892dc7662316c5175cbc5b92bbc87" Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.238470 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 15:03:13 crc kubenswrapper[4718]: E1123 15:03:13.238984 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e60da8e4-38fb-49f2-9325-8d8d55109b49" containerName="probe" Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.239363 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e60da8e4-38fb-49f2-9325-8d8d55109b49" containerName="probe" Nov 23 15:03:13 crc kubenswrapper[4718]: E1123 15:03:13.240402 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e60da8e4-38fb-49f2-9325-8d8d55109b49" containerName="cinder-scheduler" Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.240513 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e60da8e4-38fb-49f2-9325-8d8d55109b49" containerName="cinder-scheduler" Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.241536 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="e60da8e4-38fb-49f2-9325-8d8d55109b49" containerName="probe" Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.241658 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="e60da8e4-38fb-49f2-9325-8d8d55109b49" containerName="cinder-scheduler" Nov 23 15:03:13 crc 
kubenswrapper[4718]: I1123 15:03:13.242635 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 15:03:13 crc kubenswrapper[4718]: E1123 15:03:13.242680 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bda54d7c2cc0a70c59159519d8f1e0dee8892dc7662316c5175cbc5b92bbc87\": container with ID starting with 3bda54d7c2cc0a70c59159519d8f1e0dee8892dc7662316c5175cbc5b92bbc87 not found: ID does not exist" containerID="3bda54d7c2cc0a70c59159519d8f1e0dee8892dc7662316c5175cbc5b92bbc87" Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.244419 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bda54d7c2cc0a70c59159519d8f1e0dee8892dc7662316c5175cbc5b92bbc87"} err="failed to get container status \"3bda54d7c2cc0a70c59159519d8f1e0dee8892dc7662316c5175cbc5b92bbc87\": rpc error: code = NotFound desc = could not find container \"3bda54d7c2cc0a70c59159519d8f1e0dee8892dc7662316c5175cbc5b92bbc87\": container with ID starting with 3bda54d7c2cc0a70c59159519d8f1e0dee8892dc7662316c5175cbc5b92bbc87 not found: ID does not exist" Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.244547 4718 scope.go:117] "RemoveContainer" containerID="43e16240891ff330e4256ca3bf9dad447c128bf90febf76a360d3e9ec87d1a02" Nov 23 15:03:13 crc kubenswrapper[4718]: E1123 15:03:13.246766 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43e16240891ff330e4256ca3bf9dad447c128bf90febf76a360d3e9ec87d1a02\": container with ID starting with 43e16240891ff330e4256ca3bf9dad447c128bf90febf76a360d3e9ec87d1a02 not found: ID does not exist" containerID="43e16240891ff330e4256ca3bf9dad447c128bf90febf76a360d3e9ec87d1a02" Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.246796 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43e16240891ff330e4256ca3bf9dad447c128bf90febf76a360d3e9ec87d1a02"} err="failed to get container status \"43e16240891ff330e4256ca3bf9dad447c128bf90febf76a360d3e9ec87d1a02\": rpc error: code = NotFound desc = could not find container \"43e16240891ff330e4256ca3bf9dad447c128bf90febf76a360d3e9ec87d1a02\": container with ID starting with 43e16240891ff330e4256ca3bf9dad447c128bf90febf76a360d3e9ec87d1a02 not found: ID does not exist" Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.247807 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.251825 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.322276 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ac027eb-4d7f-4d21-8689-9ed48cd5b35b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1ac027eb-4d7f-4d21-8689-9ed48cd5b35b\") " pod="openstack/cinder-scheduler-0" Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.322319 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac027eb-4d7f-4d21-8689-9ed48cd5b35b-config-data\") pod \"cinder-scheduler-0\" (UID: \"1ac027eb-4d7f-4d21-8689-9ed48cd5b35b\") " pod="openstack/cinder-scheduler-0" Nov 23 
Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.322401 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac027eb-4d7f-4d21-8689-9ed48cd5b35b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1ac027eb-4d7f-4d21-8689-9ed48cd5b35b\") " pod="openstack/cinder-scheduler-0"
Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.322469 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ac027eb-4d7f-4d21-8689-9ed48cd5b35b-scripts\") pod \"cinder-scheduler-0\" (UID: \"1ac027eb-4d7f-4d21-8689-9ed48cd5b35b\") " pod="openstack/cinder-scheduler-0"
Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.322500 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hppdv\" (UniqueName: \"kubernetes.io/projected/1ac027eb-4d7f-4d21-8689-9ed48cd5b35b-kube-api-access-hppdv\") pod \"cinder-scheduler-0\" (UID: \"1ac027eb-4d7f-4d21-8689-9ed48cd5b35b\") " pod="openstack/cinder-scheduler-0"
Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.322529 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ac027eb-4d7f-4d21-8689-9ed48cd5b35b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1ac027eb-4d7f-4d21-8689-9ed48cd5b35b\") " pod="openstack/cinder-scheduler-0"
Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.424569 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ac027eb-4d7f-4d21-8689-9ed48cd5b35b-scripts\") pod \"cinder-scheduler-0\" (UID: \"1ac027eb-4d7f-4d21-8689-9ed48cd5b35b\") " pod="openstack/cinder-scheduler-0"
Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.424641 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hppdv\" (UniqueName: \"kubernetes.io/projected/1ac027eb-4d7f-4d21-8689-9ed48cd5b35b-kube-api-access-hppdv\") pod \"cinder-scheduler-0\" (UID: \"1ac027eb-4d7f-4d21-8689-9ed48cd5b35b\") " pod="openstack/cinder-scheduler-0"
Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.424687 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ac027eb-4d7f-4d21-8689-9ed48cd5b35b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1ac027eb-4d7f-4d21-8689-9ed48cd5b35b\") " pod="openstack/cinder-scheduler-0"
Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.424731 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ac027eb-4d7f-4d21-8689-9ed48cd5b35b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1ac027eb-4d7f-4d21-8689-9ed48cd5b35b\") " pod="openstack/cinder-scheduler-0"
Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.424754 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac027eb-4d7f-4d21-8689-9ed48cd5b35b-config-data\") pod \"cinder-scheduler-0\" (UID: \"1ac027eb-4d7f-4d21-8689-9ed48cd5b35b\") " pod="openstack/cinder-scheduler-0"
Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.424800 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac027eb-4d7f-4d21-8689-9ed48cd5b35b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1ac027eb-4d7f-4d21-8689-9ed48cd5b35b\") " pod="openstack/cinder-scheduler-0"
Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.425070 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ac027eb-4d7f-4d21-8689-9ed48cd5b35b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1ac027eb-4d7f-4d21-8689-9ed48cd5b35b\") " pod="openstack/cinder-scheduler-0"
Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.429385 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ac027eb-4d7f-4d21-8689-9ed48cd5b35b-scripts\") pod \"cinder-scheduler-0\" (UID: \"1ac027eb-4d7f-4d21-8689-9ed48cd5b35b\") " pod="openstack/cinder-scheduler-0"
Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.429481 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ac027eb-4d7f-4d21-8689-9ed48cd5b35b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1ac027eb-4d7f-4d21-8689-9ed48cd5b35b\") " pod="openstack/cinder-scheduler-0"
Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.431093 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac027eb-4d7f-4d21-8689-9ed48cd5b35b-config-data\") pod \"cinder-scheduler-0\" (UID: \"1ac027eb-4d7f-4d21-8689-9ed48cd5b35b\") " pod="openstack/cinder-scheduler-0"
Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.440724 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac027eb-4d7f-4d21-8689-9ed48cd5b35b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1ac027eb-4d7f-4d21-8689-9ed48cd5b35b\") " pod="openstack/cinder-scheduler-0"
Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.443029 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hppdv\" (UniqueName: \"kubernetes.io/projected/1ac027eb-4d7f-4d21-8689-9ed48cd5b35b-kube-api-access-hppdv\") pod \"cinder-scheduler-0\" (UID: \"1ac027eb-4d7f-4d21-8689-9ed48cd5b35b\") " pod="openstack/cinder-scheduler-0"
Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.499177 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Nov 23 15:03:13 crc kubenswrapper[4718]: I1123 15:03:13.579606 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 15:03:14 crc kubenswrapper[4718]: I1123 15:03:14.028105 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 15:03:14 crc kubenswrapper[4718]: I1123 15:03:14.141639 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a2720257-17da-4635-bd8c-2d65b9e8b9f0","Type":"ContainerStarted","Data":"91d217746a163f3e46a1886828f08a595b6dda7a721826a4ecd9cbb8f2a8db44"} Nov 23 15:03:14 crc kubenswrapper[4718]: I1123 15:03:14.142931 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1ac027eb-4d7f-4d21-8689-9ed48cd5b35b","Type":"ContainerStarted","Data":"77a668488a1fbbaed7610ceaf92ed3f30acc932487e9f173b18762aa078bd716"} Nov 23 15:03:14 crc kubenswrapper[4718]: I1123 15:03:14.352121 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5696759568-pxlzs" podUID="5dba5adf-299f-404c-91f9-c5848e9babe4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Nov 23 15:03:14 crc kubenswrapper[4718]: I1123 15:03:14.453512 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e60da8e4-38fb-49f2-9325-8d8d55109b49" path="/var/lib/kubelet/pods/e60da8e4-38fb-49f2-9325-8d8d55109b49/volumes" Nov 23 15:03:14 crc kubenswrapper[4718]: I1123 15:03:14.454540 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 23 15:03:15 crc kubenswrapper[4718]: I1123 15:03:15.157167 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1ac027eb-4d7f-4d21-8689-9ed48cd5b35b","Type":"ContainerStarted","Data":"9b02b7b7adb3b6c026b5790f165ed02121c938a68145cfd21bd7a35796405189"} Nov 23 15:03:16 crc kubenswrapper[4718]: I1123 15:03:16.175315 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1ac027eb-4d7f-4d21-8689-9ed48cd5b35b","Type":"ContainerStarted","Data":"105ddf91258b37a7bf31c9a1aed36b8530344018fd2b242eb150316dae34a8f4"} Nov 23 15:03:16 crc kubenswrapper[4718]: I1123 15:03:16.201019 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.200985356 podStartE2EDuration="3.200985356s" podCreationTimestamp="2025-11-23 15:03:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:03:16.195875616 +0000 UTC m=+1047.435495460" watchObservedRunningTime="2025-11-23 15:03:16.200985356 +0000 UTC m=+1047.440605190" Nov 23 15:03:16 crc kubenswrapper[4718]: E1123 15:03:16.702432 4718 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/6dc516cef32b7fbe995062e5cc4ad9f846c213228ece8d7bb5f11d4ccf03f112/diff" to get inode usage: stat /var/lib/containers/storage/overlay/6dc516cef32b7fbe995062e5cc4ad9f846c213228ece8d7bb5f11d4ccf03f112/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_ceilometer-0_b9b11789-7642-4d03-a060-26842da8ab4b/ceilometer-notification-agent/0.log" to get inode usage: stat /var/log/pods/openstack_ceilometer-0_b9b11789-7642-4d03-a060-26842da8ab4b/ceilometer-notification-agent/0.log: no such file or directory Nov 23 15:03:17 crc 
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.324141 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-589b8777c9-j8mvv"]
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.327026 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.330901 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.331031 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.332661 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.348090 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-589b8777c9-j8mvv"]
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.412855 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62536478-1337-4bad-b5e3-77cf6dd4d54b-etc-swift\") pod \"swift-proxy-589b8777c9-j8mvv\" (UID: \"62536478-1337-4bad-b5e3-77cf6dd4d54b\") " pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.412950 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62536478-1337-4bad-b5e3-77cf6dd4d54b-run-httpd\") pod \"swift-proxy-589b8777c9-j8mvv\" (UID: \"62536478-1337-4bad-b5e3-77cf6dd4d54b\") " pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.412971 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zlhn\" (UniqueName: \"kubernetes.io/projected/62536478-1337-4bad-b5e3-77cf6dd4d54b-kube-api-access-4zlhn\") pod \"swift-proxy-589b8777c9-j8mvv\" (UID: \"62536478-1337-4bad-b5e3-77cf6dd4d54b\") " pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.412993 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62536478-1337-4bad-b5e3-77cf6dd4d54b-log-httpd\") pod \"swift-proxy-589b8777c9-j8mvv\" (UID: \"62536478-1337-4bad-b5e3-77cf6dd4d54b\") " pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.413020 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62536478-1337-4bad-b5e3-77cf6dd4d54b-internal-tls-certs\") pod \"swift-proxy-589b8777c9-j8mvv\" (UID: \"62536478-1337-4bad-b5e3-77cf6dd4d54b\") " pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.413037 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62536478-1337-4bad-b5e3-77cf6dd4d54b-public-tls-certs\") pod \"swift-proxy-589b8777c9-j8mvv\" (UID: \"62536478-1337-4bad-b5e3-77cf6dd4d54b\") " pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.413097 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62536478-1337-4bad-b5e3-77cf6dd4d54b-combined-ca-bundle\") pod \"swift-proxy-589b8777c9-j8mvv\" (UID: \"62536478-1337-4bad-b5e3-77cf6dd4d54b\") " pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.413166 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62536478-1337-4bad-b5e3-77cf6dd4d54b-config-data\") pod \"swift-proxy-589b8777c9-j8mvv\" (UID: \"62536478-1337-4bad-b5e3-77cf6dd4d54b\") " pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.515237 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62536478-1337-4bad-b5e3-77cf6dd4d54b-run-httpd\") pod \"swift-proxy-589b8777c9-j8mvv\" (UID: \"62536478-1337-4bad-b5e3-77cf6dd4d54b\") " pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.515281 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zlhn\" (UniqueName: \"kubernetes.io/projected/62536478-1337-4bad-b5e3-77cf6dd4d54b-kube-api-access-4zlhn\") pod \"swift-proxy-589b8777c9-j8mvv\" (UID: \"62536478-1337-4bad-b5e3-77cf6dd4d54b\") " pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.515308 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62536478-1337-4bad-b5e3-77cf6dd4d54b-log-httpd\") pod \"swift-proxy-589b8777c9-j8mvv\" (UID: \"62536478-1337-4bad-b5e3-77cf6dd4d54b\") " pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.515334 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62536478-1337-4bad-b5e3-77cf6dd4d54b-internal-tls-certs\") pod \"swift-proxy-589b8777c9-j8mvv\" (UID: \"62536478-1337-4bad-b5e3-77cf6dd4d54b\") " pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.515355 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62536478-1337-4bad-b5e3-77cf6dd4d54b-public-tls-certs\") pod \"swift-proxy-589b8777c9-j8mvv\" (UID: \"62536478-1337-4bad-b5e3-77cf6dd4d54b\") " pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.515400 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62536478-1337-4bad-b5e3-77cf6dd4d54b-combined-ca-bundle\") pod \"swift-proxy-589b8777c9-j8mvv\" (UID: \"62536478-1337-4bad-b5e3-77cf6dd4d54b\") " pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.515485 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62536478-1337-4bad-b5e3-77cf6dd4d54b-config-data\") pod \"swift-proxy-589b8777c9-j8mvv\" (UID: \"62536478-1337-4bad-b5e3-77cf6dd4d54b\") " pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.515535 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62536478-1337-4bad-b5e3-77cf6dd4d54b-etc-swift\") pod \"swift-proxy-589b8777c9-j8mvv\" (UID: \"62536478-1337-4bad-b5e3-77cf6dd4d54b\") " pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.515901 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62536478-1337-4bad-b5e3-77cf6dd4d54b-run-httpd\") pod \"swift-proxy-589b8777c9-j8mvv\" (UID: \"62536478-1337-4bad-b5e3-77cf6dd4d54b\") " pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.516292 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62536478-1337-4bad-b5e3-77cf6dd4d54b-log-httpd\") pod \"swift-proxy-589b8777c9-j8mvv\" (UID: \"62536478-1337-4bad-b5e3-77cf6dd4d54b\") " pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.523192 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62536478-1337-4bad-b5e3-77cf6dd4d54b-internal-tls-certs\") pod \"swift-proxy-589b8777c9-j8mvv\" (UID: \"62536478-1337-4bad-b5e3-77cf6dd4d54b\") " pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.528166 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62536478-1337-4bad-b5e3-77cf6dd4d54b-combined-ca-bundle\") pod \"swift-proxy-589b8777c9-j8mvv\" (UID: \"62536478-1337-4bad-b5e3-77cf6dd4d54b\") " pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.535791 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62536478-1337-4bad-b5e3-77cf6dd4d54b-etc-swift\") pod \"swift-proxy-589b8777c9-j8mvv\" (UID: \"62536478-1337-4bad-b5e3-77cf6dd4d54b\") " pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.536376 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62536478-1337-4bad-b5e3-77cf6dd4d54b-public-tls-certs\") pod \"swift-proxy-589b8777c9-j8mvv\" (UID: \"62536478-1337-4bad-b5e3-77cf6dd4d54b\") " pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.536637 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62536478-1337-4bad-b5e3-77cf6dd4d54b-config-data\") pod \"swift-proxy-589b8777c9-j8mvv\" (UID: \"62536478-1337-4bad-b5e3-77cf6dd4d54b\") " pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.540352 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zlhn\" (UniqueName: \"kubernetes.io/projected/62536478-1337-4bad-b5e3-77cf6dd4d54b-kube-api-access-4zlhn\") pod \"swift-proxy-589b8777c9-j8mvv\" (UID: \"62536478-1337-4bad-b5e3-77cf6dd4d54b\") " pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.585056 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-dmjbz"]
Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.586176 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dmjbz"
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dmjbz" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.617128 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srptv\" (UniqueName: \"kubernetes.io/projected/2050dab9-a003-4e88-b06c-5fb9cadd5956-kube-api-access-srptv\") pod \"nova-api-db-create-dmjbz\" (UID: \"2050dab9-a003-4e88-b06c-5fb9cadd5956\") " pod="openstack/nova-api-db-create-dmjbz" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.617190 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2050dab9-a003-4e88-b06c-5fb9cadd5956-operator-scripts\") pod \"nova-api-db-create-dmjbz\" (UID: \"2050dab9-a003-4e88-b06c-5fb9cadd5956\") " pod="openstack/nova-api-db-create-dmjbz" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.623785 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dmjbz"] Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.666260 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-589b8777c9-j8mvv" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.688797 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-txb4n"] Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.689947 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-txb4n" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.708753 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-txb4n"] Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.718411 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e6e2f92-5e58-4964-a7cf-e04f3c87b20f-operator-scripts\") pod \"nova-cell0-db-create-txb4n\" (UID: \"9e6e2f92-5e58-4964-a7cf-e04f3c87b20f\") " pod="openstack/nova-cell0-db-create-txb4n" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.718468 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srptv\" (UniqueName: \"kubernetes.io/projected/2050dab9-a003-4e88-b06c-5fb9cadd5956-kube-api-access-srptv\") pod \"nova-api-db-create-dmjbz\" (UID: \"2050dab9-a003-4e88-b06c-5fb9cadd5956\") " pod="openstack/nova-api-db-create-dmjbz" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.718505 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2050dab9-a003-4e88-b06c-5fb9cadd5956-operator-scripts\") pod \"nova-api-db-create-dmjbz\" (UID: \"2050dab9-a003-4e88-b06c-5fb9cadd5956\") " pod="openstack/nova-api-db-create-dmjbz" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.718578 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhdgx\" (UniqueName: \"kubernetes.io/projected/9e6e2f92-5e58-4964-a7cf-e04f3c87b20f-kube-api-access-rhdgx\") pod \"nova-cell0-db-create-txb4n\" (UID: \"9e6e2f92-5e58-4964-a7cf-e04f3c87b20f\") " pod="openstack/nova-cell0-db-create-txb4n" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.719851 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2050dab9-a003-4e88-b06c-5fb9cadd5956-operator-scripts\") pod \"nova-api-db-create-dmjbz\" (UID: \"2050dab9-a003-4e88-b06c-5fb9cadd5956\") " pod="openstack/nova-api-db-create-dmjbz" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.723598 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-722a-account-create-z26wb"] Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.725177 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-722a-account-create-z26wb" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.727259 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.741242 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srptv\" (UniqueName: \"kubernetes.io/projected/2050dab9-a003-4e88-b06c-5fb9cadd5956-kube-api-access-srptv\") pod \"nova-api-db-create-dmjbz\" (UID: \"2050dab9-a003-4e88-b06c-5fb9cadd5956\") " pod="openstack/nova-api-db-create-dmjbz" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.744599 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-722a-account-create-z26wb"] Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.820367 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e6e2f92-5e58-4964-a7cf-e04f3c87b20f-operator-scripts\") pod \"nova-cell0-db-create-txb4n\" (UID: \"9e6e2f92-5e58-4964-a7cf-e04f3c87b20f\") " pod="openstack/nova-cell0-db-create-txb4n" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.820502 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbtcr\" (UniqueName: \"kubernetes.io/projected/91200876-6356-412a-b33a-1fe4ccb7ac38-kube-api-access-kbtcr\") pod \"nova-api-722a-account-create-z26wb\" (UID: \"91200876-6356-412a-b33a-1fe4ccb7ac38\") " pod="openstack/nova-api-722a-account-create-z26wb" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.820527 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91200876-6356-412a-b33a-1fe4ccb7ac38-operator-scripts\") pod \"nova-api-722a-account-create-z26wb\" (UID: \"91200876-6356-412a-b33a-1fe4ccb7ac38\") " pod="openstack/nova-api-722a-account-create-z26wb" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.820557 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhdgx\" (UniqueName: \"kubernetes.io/projected/9e6e2f92-5e58-4964-a7cf-e04f3c87b20f-kube-api-access-rhdgx\") pod \"nova-cell0-db-create-txb4n\" (UID: \"9e6e2f92-5e58-4964-a7cf-e04f3c87b20f\") " pod="openstack/nova-cell0-db-create-txb4n" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.821303 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e6e2f92-5e58-4964-a7cf-e04f3c87b20f-operator-scripts\") pod \"nova-cell0-db-create-txb4n\" (UID: \"9e6e2f92-5e58-4964-a7cf-e04f3c87b20f\") " pod="openstack/nova-cell0-db-create-txb4n" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.841146 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhdgx\" (UniqueName: 
\"kubernetes.io/projected/9e6e2f92-5e58-4964-a7cf-e04f3c87b20f-kube-api-access-rhdgx\") pod \"nova-cell0-db-create-txb4n\" (UID: \"9e6e2f92-5e58-4964-a7cf-e04f3c87b20f\") " pod="openstack/nova-cell0-db-create-txb4n" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.892454 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-z9f2r"] Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.893923 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z9f2r" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.917317 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-z9f2r"] Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.922037 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5qxt\" (UniqueName: \"kubernetes.io/projected/4a532b56-cb65-4f6c-bee8-a72cf66ead01-kube-api-access-h5qxt\") pod \"nova-cell1-db-create-z9f2r\" (UID: \"4a532b56-cb65-4f6c-bee8-a72cf66ead01\") " pod="openstack/nova-cell1-db-create-z9f2r" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.922147 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbtcr\" (UniqueName: \"kubernetes.io/projected/91200876-6356-412a-b33a-1fe4ccb7ac38-kube-api-access-kbtcr\") pod \"nova-api-722a-account-create-z26wb\" (UID: \"91200876-6356-412a-b33a-1fe4ccb7ac38\") " pod="openstack/nova-api-722a-account-create-z26wb" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.922178 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91200876-6356-412a-b33a-1fe4ccb7ac38-operator-scripts\") pod \"nova-api-722a-account-create-z26wb\" (UID: \"91200876-6356-412a-b33a-1fe4ccb7ac38\") " pod="openstack/nova-api-722a-account-create-z26wb" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.922256 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a532b56-cb65-4f6c-bee8-a72cf66ead01-operator-scripts\") pod \"nova-cell1-db-create-z9f2r\" (UID: \"4a532b56-cb65-4f6c-bee8-a72cf66ead01\") " pod="openstack/nova-cell1-db-create-z9f2r" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.923295 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91200876-6356-412a-b33a-1fe4ccb7ac38-operator-scripts\") pod \"nova-api-722a-account-create-z26wb\" (UID: \"91200876-6356-412a-b33a-1fe4ccb7ac38\") " pod="openstack/nova-api-722a-account-create-z26wb" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.928946 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dmjbz" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.948190 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbtcr\" (UniqueName: \"kubernetes.io/projected/91200876-6356-412a-b33a-1fe4ccb7ac38-kube-api-access-kbtcr\") pod \"nova-api-722a-account-create-z26wb\" (UID: \"91200876-6356-412a-b33a-1fe4ccb7ac38\") " pod="openstack/nova-api-722a-account-create-z26wb" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.967651 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-6b90-account-create-9llwr"] Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.970114 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6b90-account-create-9llwr" Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.980816 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6b90-account-create-9llwr"] Nov 23 15:03:17 crc kubenswrapper[4718]: I1123 15:03:17.995695 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.008201 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.008496 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" containerName="ceilometer-central-agent" containerID="cri-o://8aea759c87cbd913895f69805c9fdeecaf55bb9d1cbc1219826f0b4cfe4f2adf" gracePeriod=30 Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.009200 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" containerName="proxy-httpd" containerID="cri-o://78cff56b90a780414e1fa5e313094c791ef97eb867f7b4d9bf6533570cecdc75" gracePeriod=30 Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.009247 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" containerName="sg-core" containerID="cri-o://47d4fa454c9683a2dc39b85e1937880f2876ce282461eed942170add5cba7ed4" gracePeriod=30 Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.009296 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" containerName="ceilometer-notification-agent" containerID="cri-o://ceaf2b73c7bf8744805d2a7cfdb0d60e152213c03b9c16ff8c2df4a3c7035586" gracePeriod=30 Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.017833 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-txb4n" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.028211 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5qxt\" (UniqueName: \"kubernetes.io/projected/4a532b56-cb65-4f6c-bee8-a72cf66ead01-kube-api-access-h5qxt\") pod \"nova-cell1-db-create-z9f2r\" (UID: \"4a532b56-cb65-4f6c-bee8-a72cf66ead01\") " pod="openstack/nova-cell1-db-create-z9f2r" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.028707 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09de4533-d2f0-42cc-8bc2-170efce7f2e7-operator-scripts\") pod \"nova-cell0-6b90-account-create-9llwr\" (UID: \"09de4533-d2f0-42cc-8bc2-170efce7f2e7\") " pod="openstack/nova-cell0-6b90-account-create-9llwr" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.028787 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx7hp\" (UniqueName: \"kubernetes.io/projected/09de4533-d2f0-42cc-8bc2-170efce7f2e7-kube-api-access-nx7hp\") pod \"nova-cell0-6b90-account-create-9llwr\" (UID: \"09de4533-d2f0-42cc-8bc2-170efce7f2e7\") " pod="openstack/nova-cell0-6b90-account-create-9llwr" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.028945 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a532b56-cb65-4f6c-bee8-a72cf66ead01-operator-scripts\") pod \"nova-cell1-db-create-z9f2r\" (UID: \"4a532b56-cb65-4f6c-bee8-a72cf66ead01\") " pod="openstack/nova-cell1-db-create-z9f2r" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.029730 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a532b56-cb65-4f6c-bee8-a72cf66ead01-operator-scripts\") pod \"nova-cell1-db-create-z9f2r\" (UID: \"4a532b56-cb65-4f6c-bee8-a72cf66ead01\") " pod="openstack/nova-cell1-db-create-z9f2r" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.030518 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.071380 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5qxt\" (UniqueName: \"kubernetes.io/projected/4a532b56-cb65-4f6c-bee8-a72cf66ead01-kube-api-access-h5qxt\") pod \"nova-cell1-db-create-z9f2r\" (UID: \"4a532b56-cb65-4f6c-bee8-a72cf66ead01\") " pod="openstack/nova-cell1-db-create-z9f2r" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.087430 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cae1-account-create-gmjg7"] Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.089024 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cae1-account-create-gmjg7" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.090880 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.098077 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cae1-account-create-gmjg7"] Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.128622 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-722a-account-create-z26wb" Nov 23 15:03:18 crc kubenswrapper[4718]: W1123 15:03:18.131085 4718 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c8a309d_d182_4ad3_9818_a2c47fa25cec.slice/crio-6050772260a85e73dbc998e01d680a6da0078372e31204af4ba0c48b528323e8.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c8a309d_d182_4ad3_9818_a2c47fa25cec.slice/crio-6050772260a85e73dbc998e01d680a6da0078372e31204af4ba0c48b528323e8.scope: no such file or directory Nov 23 15:03:18 crc kubenswrapper[4718]: W1123 15:03:18.131173 4718 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f563293_98ea_4cd4_9067_95a64333591d.slice/crio-19aefae9eafc6814302ce2784b3e12ccd8963356bd809309bf30280688a4e6a5.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f563293_98ea_4cd4_9067_95a64333591d.slice/crio-19aefae9eafc6814302ce2784b3e12ccd8963356bd809309bf30280688a4e6a5.scope: no such file or directory Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.131679 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9475dfac-7aca-455a-ac54-158d673a4b6c-operator-scripts\") pod \"nova-cell1-cae1-account-create-gmjg7\" (UID: \"9475dfac-7aca-455a-ac54-158d673a4b6c\") " pod="openstack/nova-cell1-cae1-account-create-gmjg7" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.131782 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09de4533-d2f0-42cc-8bc2-170efce7f2e7-operator-scripts\") pod \"nova-cell0-6b90-account-create-9llwr\" (UID: \"09de4533-d2f0-42cc-8bc2-170efce7f2e7\") " pod="openstack/nova-cell0-6b90-account-create-9llwr" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.131833 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzgg6\" (UniqueName: \"kubernetes.io/projected/9475dfac-7aca-455a-ac54-158d673a4b6c-kube-api-access-jzgg6\") pod \"nova-cell1-cae1-account-create-gmjg7\" (UID: \"9475dfac-7aca-455a-ac54-158d673a4b6c\") " pod="openstack/nova-cell1-cae1-account-create-gmjg7" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.131862 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx7hp\" (UniqueName: \"kubernetes.io/projected/09de4533-d2f0-42cc-8bc2-170efce7f2e7-kube-api-access-nx7hp\") pod \"nova-cell0-6b90-account-create-9llwr\" (UID: \"09de4533-d2f0-42cc-8bc2-170efce7f2e7\") " pod="openstack/nova-cell0-6b90-account-create-9llwr" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.133716 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09de4533-d2f0-42cc-8bc2-170efce7f2e7-operator-scripts\") pod \"nova-cell0-6b90-account-create-9llwr\" (UID: \"09de4533-d2f0-42cc-8bc2-170efce7f2e7\") " pod="openstack/nova-cell0-6b90-account-create-9llwr" Nov 23 15:03:18 crc kubenswrapper[4718]: W1123 15:03:18.140861 4718 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f563293_98ea_4cd4_9067_95a64333591d.slice/crio-conmon-801c8d94669a0fe6bb445e9853e0756bf5ab321fc956557e7147de76d46e8e6d.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f563293_98ea_4cd4_9067_95a64333591d.slice/crio-conmon-801c8d94669a0fe6bb445e9853e0756bf5ab321fc956557e7147de76d46e8e6d.scope: no such file or directory Nov 23 15:03:18 crc kubenswrapper[4718]: W1123 15:03:18.140922 4718 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f563293_98ea_4cd4_9067_95a64333591d.slice/crio-801c8d94669a0fe6bb445e9853e0756bf5ab321fc956557e7147de76d46e8e6d.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f563293_98ea_4cd4_9067_95a64333591d.slice/crio-801c8d94669a0fe6bb445e9853e0756bf5ab321fc956557e7147de76d46e8e6d.scope: no such file or directory Nov 23 15:03:18 crc kubenswrapper[4718]: W1123 15:03:18.147839 4718 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c8a309d_d182_4ad3_9818_a2c47fa25cec.slice/crio-conmon-f4c1f7b6f58a09fdd334da6405a31e21a30c1a87a152b5e83bb3f09855109361.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c8a309d_d182_4ad3_9818_a2c47fa25cec.slice/crio-conmon-f4c1f7b6f58a09fdd334da6405a31e21a30c1a87a152b5e83bb3f09855109361.scope: no such file or directory Nov 23 15:03:18 crc kubenswrapper[4718]: W1123 15:03:18.148502 4718 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c8a309d_d182_4ad3_9818_a2c47fa25cec.slice/crio-f4c1f7b6f58a09fdd334da6405a31e21a30c1a87a152b5e83bb3f09855109361.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c8a309d_d182_4ad3_9818_a2c47fa25cec.slice/crio-f4c1f7b6f58a09fdd334da6405a31e21a30c1a87a152b5e83bb3f09855109361.scope: no such file or directory Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.163238 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx7hp\" (UniqueName: \"kubernetes.io/projected/09de4533-d2f0-42cc-8bc2-170efce7f2e7-kube-api-access-nx7hp\") pod \"nova-cell0-6b90-account-create-9llwr\" (UID: \"09de4533-d2f0-42cc-8bc2-170efce7f2e7\") " pod="openstack/nova-cell0-6b90-account-create-9llwr" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.214027 4718 generic.go:334] "Generic (PLEG): container finished" podID="b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" containerID="78cff56b90a780414e1fa5e313094c791ef97eb867f7b4d9bf6533570cecdc75" exitCode=0 Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.214086 4718 generic.go:334] "Generic (PLEG): container finished" 
podID="b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" containerID="47d4fa454c9683a2dc39b85e1937880f2876ce282461eed942170add5cba7ed4" exitCode=2 Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.214129 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8","Type":"ContainerDied","Data":"78cff56b90a780414e1fa5e313094c791ef97eb867f7b4d9bf6533570cecdc75"} Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.214155 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8","Type":"ContainerDied","Data":"47d4fa454c9683a2dc39b85e1937880f2876ce282461eed942170add5cba7ed4"} Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.219391 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z9f2r" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.230928 4718 generic.go:334] "Generic (PLEG): container finished" podID="5dba5adf-299f-404c-91f9-c5848e9babe4" containerID="5060536a7c2e7d7ff42bbf0da0aaf1f0ffb07148047ba73cb4ff2fae01e7acc2" exitCode=137 Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.230979 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5696759568-pxlzs" event={"ID":"5dba5adf-299f-404c-91f9-c5848e9babe4","Type":"ContainerDied","Data":"5060536a7c2e7d7ff42bbf0da0aaf1f0ffb07148047ba73cb4ff2fae01e7acc2"} Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.234461 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9475dfac-7aca-455a-ac54-158d673a4b6c-operator-scripts\") pod \"nova-cell1-cae1-account-create-gmjg7\" (UID: \"9475dfac-7aca-455a-ac54-158d673a4b6c\") " pod="openstack/nova-cell1-cae1-account-create-gmjg7" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.234554 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzgg6\" (UniqueName: \"kubernetes.io/projected/9475dfac-7aca-455a-ac54-158d673a4b6c-kube-api-access-jzgg6\") pod \"nova-cell1-cae1-account-create-gmjg7\" (UID: \"9475dfac-7aca-455a-ac54-158d673a4b6c\") " pod="openstack/nova-cell1-cae1-account-create-gmjg7" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.244070 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9475dfac-7aca-455a-ac54-158d673a4b6c-operator-scripts\") pod \"nova-cell1-cae1-account-create-gmjg7\" (UID: \"9475dfac-7aca-455a-ac54-158d673a4b6c\") " pod="openstack/nova-cell1-cae1-account-create-gmjg7" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.261081 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzgg6\" (UniqueName: \"kubernetes.io/projected/9475dfac-7aca-455a-ac54-158d673a4b6c-kube-api-access-jzgg6\") pod \"nova-cell1-cae1-account-create-gmjg7\" (UID: \"9475dfac-7aca-455a-ac54-158d673a4b6c\") " pod="openstack/nova-cell1-cae1-account-create-gmjg7" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.317564 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-6b90-account-create-9llwr" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.357017 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-589b8777c9-j8mvv"] Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.414059 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cae1-account-create-gmjg7" Nov 23 15:03:18 crc kubenswrapper[4718]: E1123 15:03:18.419134 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c8a309d_d182_4ad3_9818_a2c47fa25cec.slice/crio-7a3facdfdd0ea72aa7111662fac60ea2f2ddac429c79367211d27a101bb95c75\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f563293_98ea_4cd4_9067_95a64333591d.slice/crio-d1dee27b40446b32d8439280124f70215db5951dd90790640802867a91514afa\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c8a309d_d182_4ad3_9818_a2c47fa25cec.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f563293_98ea_4cd4_9067_95a64333591d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f563293_98ea_4cd4_9067_95a64333591d.slice/crio-conmon-19aefae9eafc6814302ce2784b3e12ccd8963356bd809309bf30280688a4e6a5.scope\": RecentStats: unable to find data in memory cache]" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.579932 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.645931 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dmjbz"] Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.794241 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-txb4n"] Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.803875 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-722a-account-create-z26wb"] Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.838142 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.958814 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5dba5adf-299f-404c-91f9-c5848e9babe4-horizon-secret-key\") pod \"5dba5adf-299f-404c-91f9-c5848e9babe4\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.958921 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dba5adf-299f-404c-91f9-c5848e9babe4-horizon-tls-certs\") pod \"5dba5adf-299f-404c-91f9-c5848e9babe4\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.958971 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dba5adf-299f-404c-91f9-c5848e9babe4-logs\") pod \"5dba5adf-299f-404c-91f9-c5848e9babe4\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.959049 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5dba5adf-299f-404c-91f9-c5848e9babe4-scripts\") pod \"5dba5adf-299f-404c-91f9-c5848e9babe4\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.959091 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5dba5adf-299f-404c-91f9-c5848e9babe4-config-data\") pod \"5dba5adf-299f-404c-91f9-c5848e9babe4\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.959121 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dba5adf-299f-404c-91f9-c5848e9babe4-combined-ca-bundle\") pod \"5dba5adf-299f-404c-91f9-c5848e9babe4\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.959163 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78kbg\" (UniqueName: \"kubernetes.io/projected/5dba5adf-299f-404c-91f9-c5848e9babe4-kube-api-access-78kbg\") pod \"5dba5adf-299f-404c-91f9-c5848e9babe4\" (UID: \"5dba5adf-299f-404c-91f9-c5848e9babe4\") " Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.963775 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dba5adf-299f-404c-91f9-c5848e9babe4-logs" (OuterVolumeSpecName: "logs") pod "5dba5adf-299f-404c-91f9-c5848e9babe4" (UID: "5dba5adf-299f-404c-91f9-c5848e9babe4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:03:18 crc kubenswrapper[4718]: I1123 15:03:18.994749 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dba5adf-299f-404c-91f9-c5848e9babe4-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "5dba5adf-299f-404c-91f9-c5848e9babe4" (UID: "5dba5adf-299f-404c-91f9-c5848e9babe4"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.003290 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dba5adf-299f-404c-91f9-c5848e9babe4-kube-api-access-78kbg" (OuterVolumeSpecName: "kube-api-access-78kbg") pod "5dba5adf-299f-404c-91f9-c5848e9babe4" (UID: "5dba5adf-299f-404c-91f9-c5848e9babe4"). InnerVolumeSpecName "kube-api-access-78kbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.006732 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-z9f2r"] Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.061711 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78kbg\" (UniqueName: \"kubernetes.io/projected/5dba5adf-299f-404c-91f9-c5848e9babe4-kube-api-access-78kbg\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.062092 4718 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5dba5adf-299f-404c-91f9-c5848e9babe4-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.062156 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dba5adf-299f-404c-91f9-c5848e9babe4-logs\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.170761 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cae1-account-create-gmjg7"] Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.179837 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6b90-account-create-9llwr"] Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.207638 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dba5adf-299f-404c-91f9-c5848e9babe4-config-data" (OuterVolumeSpecName: "config-data") pod "5dba5adf-299f-404c-91f9-c5848e9babe4" (UID: "5dba5adf-299f-404c-91f9-c5848e9babe4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.214092 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dba5adf-299f-404c-91f9-c5848e9babe4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5dba5adf-299f-404c-91f9-c5848e9babe4" (UID: "5dba5adf-299f-404c-91f9-c5848e9babe4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.224062 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dba5adf-299f-404c-91f9-c5848e9babe4-scripts" (OuterVolumeSpecName: "scripts") pod "5dba5adf-299f-404c-91f9-c5848e9babe4" (UID: "5dba5adf-299f-404c-91f9-c5848e9babe4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.263540 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-589b8777c9-j8mvv" event={"ID":"62536478-1337-4bad-b5e3-77cf6dd4d54b","Type":"ContainerStarted","Data":"e0969ff03e7ec40d2ea68b300813276143cb75bd34038c32a0f8ecbe6c62bb08"} Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.263604 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-589b8777c9-j8mvv" event={"ID":"62536478-1337-4bad-b5e3-77cf6dd4d54b","Type":"ContainerStarted","Data":"1da44faea59888a4843764a273b982253c0cf4e917a23b95e4daf23dab17a245"} Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.268868 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5dba5adf-299f-404c-91f9-c5848e9babe4-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.269037 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5dba5adf-299f-404c-91f9-c5848e9babe4-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.269057 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dba5adf-299f-404c-91f9-c5848e9babe4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.283930 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z9f2r" event={"ID":"4a532b56-cb65-4f6c-bee8-a72cf66ead01","Type":"ContainerStarted","Data":"d2c26b937bcaeeac80f345e4f17b8cd5202541aff975c24888afae82e1c14064"} Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.294181 4718 generic.go:334] "Generic (PLEG): container finished" podID="b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" containerID="8aea759c87cbd913895f69805c9fdeecaf55bb9d1cbc1219826f0b4cfe4f2adf" exitCode=0 Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.294262 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8","Type":"ContainerDied","Data":"8aea759c87cbd913895f69805c9fdeecaf55bb9d1cbc1219826f0b4cfe4f2adf"} Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.303245 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5696759568-pxlzs" event={"ID":"5dba5adf-299f-404c-91f9-c5848e9babe4","Type":"ContainerDied","Data":"459928a47f6fdd1add46415725df9b5ca31f042b6679472b21e0a03f7b653f00"} Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.303326 4718 scope.go:117] "RemoveContainer" containerID="b33354d724c85f5c24909dfdf0de020feb9b6de40d0a896f864c2518ec57bb14" Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.303290 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5696759568-pxlzs" Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.319144 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-722a-account-create-z26wb" event={"ID":"91200876-6356-412a-b33a-1fe4ccb7ac38","Type":"ContainerStarted","Data":"4a0f350260be79e721b5110ef4eed811af3e82b1f33f4247c0c2579b0df280cc"} Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.321304 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dba5adf-299f-404c-91f9-c5848e9babe4-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "5dba5adf-299f-404c-91f9-c5848e9babe4" (UID: "5dba5adf-299f-404c-91f9-c5848e9babe4"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.329154 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cae1-account-create-gmjg7" event={"ID":"9475dfac-7aca-455a-ac54-158d673a4b6c","Type":"ContainerStarted","Data":"2d51adf54dd1fea0c3d7a8cf2632d72893c10bca187db11b1ea439b1ef91627b"} Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.365408 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dmjbz" event={"ID":"2050dab9-a003-4e88-b06c-5fb9cadd5956","Type":"ContainerStarted","Data":"734ac2a3f45f3019f1e2c7519f8c5a54c9a4a53b7586e139d3953c703d011162"} Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.365596 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dmjbz" event={"ID":"2050dab9-a003-4e88-b06c-5fb9cadd5956","Type":"ContainerStarted","Data":"2262ec3347546fbd78ca76a52f116c9ee22f5b7810448f59b7584a6c2b4a09f9"} Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.370706 4718 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dba5adf-299f-404c-91f9-c5848e9babe4-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.380504 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-722a-account-create-z26wb" podStartSLOduration=2.380486683 podStartE2EDuration="2.380486683s" podCreationTimestamp="2025-11-23 15:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:03:19.372352981 +0000 UTC m=+1050.611972855" watchObservedRunningTime="2025-11-23 15:03:19.380486683 +0000 UTC m=+1050.620106527" Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.383849 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6b90-account-create-9llwr" event={"ID":"09de4533-d2f0-42cc-8bc2-170efce7f2e7","Type":"ContainerStarted","Data":"ebf805d5c7917ee1c9c8b02879386601015fd56ead56dd2abf2c705d144f6f90"} Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.410563 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-txb4n" event={"ID":"9e6e2f92-5e58-4964-a7cf-e04f3c87b20f","Type":"ContainerStarted","Data":"1a135844847dac2b5cbbac24a60b3ab32f39f663d031986f0e847fb7a90640e0"} Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.416072 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-dmjbz" podStartSLOduration=2.416050693 podStartE2EDuration="2.416050693s" podCreationTimestamp="2025-11-23 15:03:17 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:03:19.394836064 +0000 UTC m=+1050.634455918" watchObservedRunningTime="2025-11-23 15:03:19.416050693 +0000 UTC m=+1050.655670537" Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.458218 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-txb4n" podStartSLOduration=2.458195254 podStartE2EDuration="2.458195254s" podCreationTimestamp="2025-11-23 15:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:03:19.444434418 +0000 UTC m=+1050.684067972" watchObservedRunningTime="2025-11-23 15:03:19.458195254 +0000 UTC m=+1050.697815088" Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.594791 4718 scope.go:117] "RemoveContainer" containerID="5060536a7c2e7d7ff42bbf0da0aaf1f0ffb07148047ba73cb4ff2fae01e7acc2" Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.718899 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5696759568-pxlzs"] Nov 23 15:03:19 crc kubenswrapper[4718]: I1123 15:03:19.726432 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5696759568-pxlzs"] Nov 23 15:03:20 crc kubenswrapper[4718]: I1123 15:03:20.424215 4718 generic.go:334] "Generic (PLEG): container finished" podID="9475dfac-7aca-455a-ac54-158d673a4b6c" containerID="780a5c0cc0a14b2dabe6f3418818247a9c2b067524618dccedb2afc7ea504c0a" exitCode=0 Nov 23 15:03:20 crc kubenswrapper[4718]: I1123 15:03:20.424276 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cae1-account-create-gmjg7" event={"ID":"9475dfac-7aca-455a-ac54-158d673a4b6c","Type":"ContainerDied","Data":"780a5c0cc0a14b2dabe6f3418818247a9c2b067524618dccedb2afc7ea504c0a"} Nov 23 15:03:20 crc kubenswrapper[4718]: I1123 15:03:20.427642 4718 generic.go:334] "Generic (PLEG): container finished" podID="2050dab9-a003-4e88-b06c-5fb9cadd5956" containerID="734ac2a3f45f3019f1e2c7519f8c5a54c9a4a53b7586e139d3953c703d011162" exitCode=0 Nov 23 15:03:20 crc kubenswrapper[4718]: I1123 15:03:20.427710 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dmjbz" event={"ID":"2050dab9-a003-4e88-b06c-5fb9cadd5956","Type":"ContainerDied","Data":"734ac2a3f45f3019f1e2c7519f8c5a54c9a4a53b7586e139d3953c703d011162"} Nov 23 15:03:20 crc kubenswrapper[4718]: W1123 15:03:20.428745 4718 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode60da8e4_38fb_49f2_9325_8d8d55109b49.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode60da8e4_38fb_49f2_9325_8d8d55109b49.slice: no such file or directory Nov 23 15:03:20 crc kubenswrapper[4718]: W1123 15:03:20.428808 4718 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef450690_daaa_42a6_9b05_74dc308bdf50.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef450690_daaa_42a6_9b05_74dc308bdf50.slice: no such file or directory Nov 23 15:03:20 crc kubenswrapper[4718]: W1123 15:03:20.428835 4718 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23c3b676_60b7_4d06_b284_2be1fd9824c5.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23c3b676_60b7_4d06_b284_2be1fd9824c5.slice: no such file or directory Nov 23 15:03:20 crc kubenswrapper[4718]: I1123 15:03:20.431252 4718 generic.go:334] "Generic (PLEG): container finished" podID="09de4533-d2f0-42cc-8bc2-170efce7f2e7" containerID="d1066a1af9b63fc9b869f056d6dafb7bca7a08016a551c9379cb7c6155f53e0f" exitCode=0 Nov 23 15:03:20 crc kubenswrapper[4718]: I1123 15:03:20.431389 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6b90-account-create-9llwr" event={"ID":"09de4533-d2f0-42cc-8bc2-170efce7f2e7","Type":"ContainerDied","Data":"d1066a1af9b63fc9b869f056d6dafb7bca7a08016a551c9379cb7c6155f53e0f"} Nov 23 15:03:20 crc kubenswrapper[4718]: I1123 15:03:20.461732 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dba5adf-299f-404c-91f9-c5848e9babe4" path="/var/lib/kubelet/pods/5dba5adf-299f-404c-91f9-c5848e9babe4/volumes" Nov 23 15:03:20 crc kubenswrapper[4718]: I1123 15:03:20.463637 4718 generic.go:334] "Generic (PLEG): container finished" podID="9e6e2f92-5e58-4964-a7cf-e04f3c87b20f" containerID="d096ba372fe741fa0427963212ec8a73880cf7758c556e9ab109f837a775f8a3" exitCode=0 Nov 23 15:03:20 crc kubenswrapper[4718]: I1123 15:03:20.463993 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-txb4n" event={"ID":"9e6e2f92-5e58-4964-a7cf-e04f3c87b20f","Type":"ContainerDied","Data":"d096ba372fe741fa0427963212ec8a73880cf7758c556e9ab109f837a775f8a3"} Nov 23 15:03:20 crc kubenswrapper[4718]: I1123 15:03:20.487704 4718 generic.go:334] "Generic (PLEG): container finished" podID="91200876-6356-412a-b33a-1fe4ccb7ac38" containerID="3f8a94a95d1222745e8bbeaaf74770c0ba13d44cdb450beaf9858acca915423c" exitCode=0 Nov 23 15:03:20 crc kubenswrapper[4718]: I1123 15:03:20.487801 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-722a-account-create-z26wb" event={"ID":"91200876-6356-412a-b33a-1fe4ccb7ac38","Type":"ContainerDied","Data":"3f8a94a95d1222745e8bbeaaf74770c0ba13d44cdb450beaf9858acca915423c"} Nov 23 15:03:20 crc kubenswrapper[4718]: I1123 15:03:20.499258 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-589b8777c9-j8mvv" event={"ID":"62536478-1337-4bad-b5e3-77cf6dd4d54b","Type":"ContainerStarted","Data":"7f8ef938dd0af81bee9f4e841f8f1ec42dc82d3470e7fe2821fbaa7355ff1215"} Nov 23 15:03:20 crc kubenswrapper[4718]: I1123 15:03:20.503035 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-589b8777c9-j8mvv" Nov 23 15:03:20 crc kubenswrapper[4718]: I1123 15:03:20.503055 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-589b8777c9-j8mvv" Nov 23 15:03:20 crc kubenswrapper[4718]: I1123 15:03:20.510248 4718 generic.go:334] "Generic (PLEG): container finished" podID="4a532b56-cb65-4f6c-bee8-a72cf66ead01" containerID="bce4b8e30e462952bd2e068ce95bc2c5f7651ec34de3aa0a356265ee5af6574b" exitCode=0 Nov 23 15:03:20 crc kubenswrapper[4718]: I1123 15:03:20.510318 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z9f2r" event={"ID":"4a532b56-cb65-4f6c-bee8-a72cf66ead01","Type":"ContainerDied","Data":"bce4b8e30e462952bd2e068ce95bc2c5f7651ec34de3aa0a356265ee5af6574b"} Nov 23 15:03:20 crc 
kubenswrapper[4718]: I1123 15:03:20.605043 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-589b8777c9-j8mvv" podStartSLOduration=3.605024397 podStartE2EDuration="3.605024397s" podCreationTimestamp="2025-11-23 15:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:03:20.593418311 +0000 UTC m=+1051.833038165" watchObservedRunningTime="2025-11-23 15:03:20.605024397 +0000 UTC m=+1051.844644231" Nov 23 15:03:20 crc kubenswrapper[4718]: I1123 15:03:20.808058 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7fdf4df4d-qlcjn" Nov 23 15:03:20 crc kubenswrapper[4718]: I1123 15:03:20.936362 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7fdf4df4d-qlcjn" Nov 23 15:03:21 crc kubenswrapper[4718]: I1123 15:03:21.558218 4718 generic.go:334] "Generic (PLEG): container finished" podID="b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" containerID="ceaf2b73c7bf8744805d2a7cfdb0d60e152213c03b9c16ff8c2df4a3c7035586" exitCode=0 Nov 23 15:03:21 crc kubenswrapper[4718]: I1123 15:03:21.558426 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8","Type":"ContainerDied","Data":"ceaf2b73c7bf8744805d2a7cfdb0d60e152213c03b9c16ff8c2df4a3c7035586"} Nov 23 15:03:23 crc kubenswrapper[4718]: I1123 15:03:23.785172 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 23 15:03:24 crc kubenswrapper[4718]: I1123 15:03:24.054951 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.159:3000/\": dial tcp 10.217.0.159:3000: connect: connection refused" Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.610502 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cae1-account-create-gmjg7" event={"ID":"9475dfac-7aca-455a-ac54-158d673a4b6c","Type":"ContainerDied","Data":"2d51adf54dd1fea0c3d7a8cf2632d72893c10bca187db11b1ea439b1ef91627b"} Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.610814 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d51adf54dd1fea0c3d7a8cf2632d72893c10bca187db11b1ea439b1ef91627b" Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.613300 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dmjbz" event={"ID":"2050dab9-a003-4e88-b06c-5fb9cadd5956","Type":"ContainerDied","Data":"2262ec3347546fbd78ca76a52f116c9ee22f5b7810448f59b7584a6c2b4a09f9"} Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.613341 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2262ec3347546fbd78ca76a52f116c9ee22f5b7810448f59b7584a6c2b4a09f9" Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.615115 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6b90-account-create-9llwr" event={"ID":"09de4533-d2f0-42cc-8bc2-170efce7f2e7","Type":"ContainerDied","Data":"ebf805d5c7917ee1c9c8b02879386601015fd56ead56dd2abf2c705d144f6f90"} Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.615143 4718 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ebf805d5c7917ee1c9c8b02879386601015fd56ead56dd2abf2c705d144f6f90" Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.616575 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-txb4n" event={"ID":"9e6e2f92-5e58-4964-a7cf-e04f3c87b20f","Type":"ContainerDied","Data":"1a135844847dac2b5cbbac24a60b3ab32f39f663d031986f0e847fb7a90640e0"} Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.616606 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a135844847dac2b5cbbac24a60b3ab32f39f663d031986f0e847fb7a90640e0" Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.619762 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-722a-account-create-z26wb" event={"ID":"91200876-6356-412a-b33a-1fe4ccb7ac38","Type":"ContainerDied","Data":"4a0f350260be79e721b5110ef4eed811af3e82b1f33f4247c0c2579b0df280cc"} Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.619796 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a0f350260be79e721b5110ef4eed811af3e82b1f33f4247c0c2579b0df280cc" Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.621539 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z9f2r" event={"ID":"4a532b56-cb65-4f6c-bee8-a72cf66ead01","Type":"ContainerDied","Data":"d2c26b937bcaeeac80f345e4f17b8cd5202541aff975c24888afae82e1c14064"} Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.621569 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2c26b937bcaeeac80f345e4f17b8cd5202541aff975c24888afae82e1c14064" Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.746278 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-txb4n" Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.752392 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z9f2r" Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.813925 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-722a-account-create-z26wb" Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.825425 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6b90-account-create-9llwr" Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.831292 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cae1-account-create-gmjg7" Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.840314 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dmjbz" Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.881543 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.893659 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09de4533-d2f0-42cc-8bc2-170efce7f2e7-operator-scripts\") pod \"09de4533-d2f0-42cc-8bc2-170efce7f2e7\" (UID: \"09de4533-d2f0-42cc-8bc2-170efce7f2e7\") " Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.893948 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91200876-6356-412a-b33a-1fe4ccb7ac38-operator-scripts\") pod \"91200876-6356-412a-b33a-1fe4ccb7ac38\" (UID: \"91200876-6356-412a-b33a-1fe4ccb7ac38\") " Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.894083 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx7hp\" (UniqueName: \"kubernetes.io/projected/09de4533-d2f0-42cc-8bc2-170efce7f2e7-kube-api-access-nx7hp\") pod \"09de4533-d2f0-42cc-8bc2-170efce7f2e7\" (UID: \"09de4533-d2f0-42cc-8bc2-170efce7f2e7\") " Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.894187 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhdgx\" (UniqueName: \"kubernetes.io/projected/9e6e2f92-5e58-4964-a7cf-e04f3c87b20f-kube-api-access-rhdgx\") pod \"9e6e2f92-5e58-4964-a7cf-e04f3c87b20f\" (UID: \"9e6e2f92-5e58-4964-a7cf-e04f3c87b20f\") " Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.894323 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e6e2f92-5e58-4964-a7cf-e04f3c87b20f-operator-scripts\") pod \"9e6e2f92-5e58-4964-a7cf-e04f3c87b20f\" (UID: \"9e6e2f92-5e58-4964-a7cf-e04f3c87b20f\") " Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.894460 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5qxt\" (UniqueName: \"kubernetes.io/projected/4a532b56-cb65-4f6c-bee8-a72cf66ead01-kube-api-access-h5qxt\") pod \"4a532b56-cb65-4f6c-bee8-a72cf66ead01\" (UID: \"4a532b56-cb65-4f6c-bee8-a72cf66ead01\") " Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.894562 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbtcr\" (UniqueName: \"kubernetes.io/projected/91200876-6356-412a-b33a-1fe4ccb7ac38-kube-api-access-kbtcr\") pod \"91200876-6356-412a-b33a-1fe4ccb7ac38\" (UID: \"91200876-6356-412a-b33a-1fe4ccb7ac38\") " Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.894718 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a532b56-cb65-4f6c-bee8-a72cf66ead01-operator-scripts\") pod \"4a532b56-cb65-4f6c-bee8-a72cf66ead01\" (UID: \"4a532b56-cb65-4f6c-bee8-a72cf66ead01\") " Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.896006 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a532b56-cb65-4f6c-bee8-a72cf66ead01-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a532b56-cb65-4f6c-bee8-a72cf66ead01" (UID: "4a532b56-cb65-4f6c-bee8-a72cf66ead01"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.897722 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e6e2f92-5e58-4964-a7cf-e04f3c87b20f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9e6e2f92-5e58-4964-a7cf-e04f3c87b20f" (UID: "9e6e2f92-5e58-4964-a7cf-e04f3c87b20f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.897920 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09de4533-d2f0-42cc-8bc2-170efce7f2e7-kube-api-access-nx7hp" (OuterVolumeSpecName: "kube-api-access-nx7hp") pod "09de4533-d2f0-42cc-8bc2-170efce7f2e7" (UID: "09de4533-d2f0-42cc-8bc2-170efce7f2e7"). InnerVolumeSpecName "kube-api-access-nx7hp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.900224 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e6e2f92-5e58-4964-a7cf-e04f3c87b20f-kube-api-access-rhdgx" (OuterVolumeSpecName: "kube-api-access-rhdgx") pod "9e6e2f92-5e58-4964-a7cf-e04f3c87b20f" (UID: "9e6e2f92-5e58-4964-a7cf-e04f3c87b20f"). InnerVolumeSpecName "kube-api-access-rhdgx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.901043 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09de4533-d2f0-42cc-8bc2-170efce7f2e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "09de4533-d2f0-42cc-8bc2-170efce7f2e7" (UID: "09de4533-d2f0-42cc-8bc2-170efce7f2e7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.903645 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91200876-6356-412a-b33a-1fe4ccb7ac38-kube-api-access-kbtcr" (OuterVolumeSpecName: "kube-api-access-kbtcr") pod "91200876-6356-412a-b33a-1fe4ccb7ac38" (UID: "91200876-6356-412a-b33a-1fe4ccb7ac38"). InnerVolumeSpecName "kube-api-access-kbtcr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.904204 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a532b56-cb65-4f6c-bee8-a72cf66ead01-kube-api-access-h5qxt" (OuterVolumeSpecName: "kube-api-access-h5qxt") pod "4a532b56-cb65-4f6c-bee8-a72cf66ead01" (UID: "4a532b56-cb65-4f6c-bee8-a72cf66ead01"). InnerVolumeSpecName "kube-api-access-h5qxt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.904612 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91200876-6356-412a-b33a-1fe4ccb7ac38-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "91200876-6356-412a-b33a-1fe4ccb7ac38" (UID: "91200876-6356-412a-b33a-1fe4ccb7ac38"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.996880 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvhp9\" (UniqueName: \"kubernetes.io/projected/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-kube-api-access-xvhp9\") pod \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") "
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.997280 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-log-httpd\") pod \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") "
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.997308 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2050dab9-a003-4e88-b06c-5fb9cadd5956-operator-scripts\") pod \"2050dab9-a003-4e88-b06c-5fb9cadd5956\" (UID: \"2050dab9-a003-4e88-b06c-5fb9cadd5956\") "
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.997379 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-run-httpd\") pod \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") "
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.997408 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-config-data\") pod \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") "
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.997484 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-scripts\") pod \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") "
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.997526 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-sg-core-conf-yaml\") pod \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") "
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.997542 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9475dfac-7aca-455a-ac54-158d673a4b6c-operator-scripts\") pod \"9475dfac-7aca-455a-ac54-158d673a4b6c\" (UID: \"9475dfac-7aca-455a-ac54-158d673a4b6c\") "
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.997575 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srptv\" (UniqueName: \"kubernetes.io/projected/2050dab9-a003-4e88-b06c-5fb9cadd5956-kube-api-access-srptv\") pod \"2050dab9-a003-4e88-b06c-5fb9cadd5956\" (UID: \"2050dab9-a003-4e88-b06c-5fb9cadd5956\") "
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.997597 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzgg6\" (UniqueName: \"kubernetes.io/projected/9475dfac-7aca-455a-ac54-158d673a4b6c-kube-api-access-jzgg6\") pod \"9475dfac-7aca-455a-ac54-158d673a4b6c\" (UID: \"9475dfac-7aca-455a-ac54-158d673a4b6c\") "
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.997629 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-combined-ca-bundle\") pod \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\" (UID: \"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8\") "
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.997755 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" (UID: "b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.997805 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" (UID: "b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.998103 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2050dab9-a003-4e88-b06c-5fb9cadd5956-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2050dab9-a003-4e88-b06c-5fb9cadd5956" (UID: "2050dab9-a003-4e88-b06c-5fb9cadd5956"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.998304 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a532b56-cb65-4f6c-bee8-a72cf66ead01-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.998323 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09de4533-d2f0-42cc-8bc2-170efce7f2e7-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.998334 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91200876-6356-412a-b33a-1fe4ccb7ac38-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.998343 4718 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.998351 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx7hp\" (UniqueName: \"kubernetes.io/projected/09de4533-d2f0-42cc-8bc2-170efce7f2e7-kube-api-access-nx7hp\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.998361 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2050dab9-a003-4e88-b06c-5fb9cadd5956-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.998370 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhdgx\" (UniqueName: \"kubernetes.io/projected/9e6e2f92-5e58-4964-a7cf-e04f3c87b20f-kube-api-access-rhdgx\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.998431 4718 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.998529 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e6e2f92-5e58-4964-a7cf-e04f3c87b20f-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.998538 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5qxt\" (UniqueName: \"kubernetes.io/projected/4a532b56-cb65-4f6c-bee8-a72cf66ead01-kube-api-access-h5qxt\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.998547 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbtcr\" (UniqueName: \"kubernetes.io/projected/91200876-6356-412a-b33a-1fe4ccb7ac38-kube-api-access-kbtcr\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:26 crc kubenswrapper[4718]: I1123 15:03:26.998663 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9475dfac-7aca-455a-ac54-158d673a4b6c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9475dfac-7aca-455a-ac54-158d673a4b6c" (UID: "9475dfac-7aca-455a-ac54-158d673a4b6c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.000671 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-kube-api-access-xvhp9" (OuterVolumeSpecName: "kube-api-access-xvhp9") pod "b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" (UID: "b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8"). InnerVolumeSpecName "kube-api-access-xvhp9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.001907 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2050dab9-a003-4e88-b06c-5fb9cadd5956-kube-api-access-srptv" (OuterVolumeSpecName: "kube-api-access-srptv") pod "2050dab9-a003-4e88-b06c-5fb9cadd5956" (UID: "2050dab9-a003-4e88-b06c-5fb9cadd5956"). InnerVolumeSpecName "kube-api-access-srptv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.004702 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-scripts" (OuterVolumeSpecName: "scripts") pod "b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" (UID: "b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.010580 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9475dfac-7aca-455a-ac54-158d673a4b6c-kube-api-access-jzgg6" (OuterVolumeSpecName: "kube-api-access-jzgg6") pod "9475dfac-7aca-455a-ac54-158d673a4b6c" (UID: "9475dfac-7aca-455a-ac54-158d673a4b6c"). InnerVolumeSpecName "kube-api-access-jzgg6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.024692 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" (UID: "b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.074474 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" (UID: "b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.095928 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-config-data" (OuterVolumeSpecName: "config-data") pod "b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" (UID: "b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.100222 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvhp9\" (UniqueName: \"kubernetes.io/projected/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-kube-api-access-xvhp9\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.100256 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.100267 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.100276 4718 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.100284 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9475dfac-7aca-455a-ac54-158d673a4b6c-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.100294 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srptv\" (UniqueName: \"kubernetes.io/projected/2050dab9-a003-4e88-b06c-5fb9cadd5956-kube-api-access-srptv\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.100302 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzgg6\" (UniqueName: \"kubernetes.io/projected/9475dfac-7aca-455a-ac54-158d673a4b6c-kube-api-access-jzgg6\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.100309 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.631624 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8","Type":"ContainerDied","Data":"060226526cf6c190ac4cadd70e251d7fce6a90ad6a7917182b1605ea00dfce16"}
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.631675 4718 scope.go:117] "RemoveContainer" containerID="78cff56b90a780414e1fa5e313094c791ef97eb867f7b4d9bf6533570cecdc75"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.631676 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.638104 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-722a-account-create-z26wb"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.638120 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cae1-account-create-gmjg7"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.638128 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6b90-account-create-9llwr"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.638175 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a2720257-17da-4635-bd8c-2d65b9e8b9f0","Type":"ContainerStarted","Data":"7d684276a495f61f53c4a40edeee5f20a44709b468e6349f6f1f900f1ca1368b"}
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.638224 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-txb4n"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.638249 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z9f2r"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.638268 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dmjbz"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.662841 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.5319977 podStartE2EDuration="15.662824515s" podCreationTimestamp="2025-11-23 15:03:12 +0000 UTC" firstStartedPulling="2025-11-23 15:03:13.504794241 +0000 UTC m=+1044.744414085" lastFinishedPulling="2025-11-23 15:03:26.635621056 +0000 UTC m=+1057.875240900" observedRunningTime="2025-11-23 15:03:27.661852558 +0000 UTC m=+1058.901472402" watchObservedRunningTime="2025-11-23 15:03:27.662824515 +0000 UTC m=+1058.902444359"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.664657 4718 scope.go:117] "RemoveContainer" containerID="47d4fa454c9683a2dc39b85e1937880f2876ce282461eed942170add5cba7ed4"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.673819 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.677761 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-589b8777c9-j8mvv"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.735769 4718 scope.go:117] "RemoveContainer" containerID="ceaf2b73c7bf8744805d2a7cfdb0d60e152213c03b9c16ff8c2df4a3c7035586"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.768038 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.778663 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.800128 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 23 15:03:27 crc kubenswrapper[4718]: E1123 15:03:27.802403 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" containerName="sg-core"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.802431 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" containerName="sg-core"
Nov 23 15:03:27 crc kubenswrapper[4718]: E1123 15:03:27.802457 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" containerName="ceilometer-notification-agent"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.802463 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" containerName="ceilometer-notification-agent"
Nov 23 15:03:27 crc kubenswrapper[4718]: E1123 15:03:27.802478 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2050dab9-a003-4e88-b06c-5fb9cadd5956" containerName="mariadb-database-create"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.802485 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2050dab9-a003-4e88-b06c-5fb9cadd5956" containerName="mariadb-database-create"
Nov 23 15:03:27 crc kubenswrapper[4718]: E1123 15:03:27.802495 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a532b56-cb65-4f6c-bee8-a72cf66ead01" containerName="mariadb-database-create"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.802502 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a532b56-cb65-4f6c-bee8-a72cf66ead01" containerName="mariadb-database-create"
Nov 23 15:03:27 crc kubenswrapper[4718]: E1123 15:03:27.802515 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" containerName="ceilometer-central-agent"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.802520 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" containerName="ceilometer-central-agent"
Nov 23 15:03:27 crc kubenswrapper[4718]: E1123 15:03:27.802536 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e6e2f92-5e58-4964-a7cf-e04f3c87b20f" containerName="mariadb-database-create"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.802542 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e6e2f92-5e58-4964-a7cf-e04f3c87b20f" containerName="mariadb-database-create"
Nov 23 15:03:27 crc kubenswrapper[4718]: E1123 15:03:27.802552 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dba5adf-299f-404c-91f9-c5848e9babe4" containerName="horizon"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.802558 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dba5adf-299f-404c-91f9-c5848e9babe4" containerName="horizon"
Nov 23 15:03:27 crc kubenswrapper[4718]: E1123 15:03:27.802573 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" containerName="proxy-httpd"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.802580 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" containerName="proxy-httpd"
Nov 23 15:03:27 crc kubenswrapper[4718]: E1123 15:03:27.802595 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09de4533-d2f0-42cc-8bc2-170efce7f2e7" containerName="mariadb-account-create"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.802601 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="09de4533-d2f0-42cc-8bc2-170efce7f2e7" containerName="mariadb-account-create"
Nov 23 15:03:27 crc kubenswrapper[4718]: E1123 15:03:27.802615 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91200876-6356-412a-b33a-1fe4ccb7ac38" containerName="mariadb-account-create"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.802621 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="91200876-6356-412a-b33a-1fe4ccb7ac38" containerName="mariadb-account-create"
Nov 23 15:03:27 crc kubenswrapper[4718]: E1123 15:03:27.802635 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9475dfac-7aca-455a-ac54-158d673a4b6c" containerName="mariadb-account-create"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.802641 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9475dfac-7aca-455a-ac54-158d673a4b6c" containerName="mariadb-account-create"
Nov 23 15:03:27 crc kubenswrapper[4718]: E1123 15:03:27.802654 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dba5adf-299f-404c-91f9-c5848e9babe4" containerName="horizon-log"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.802660 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dba5adf-299f-404c-91f9-c5848e9babe4" containerName="horizon-log"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.802865 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" containerName="ceilometer-notification-agent"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.802879 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="9475dfac-7aca-455a-ac54-158d673a4b6c" containerName="mariadb-account-create"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.802887 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a532b56-cb65-4f6c-bee8-a72cf66ead01" containerName="mariadb-database-create"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.802899 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="91200876-6356-412a-b33a-1fe4ccb7ac38" containerName="mariadb-account-create"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.802908 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" containerName="sg-core"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.802920 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="09de4533-d2f0-42cc-8bc2-170efce7f2e7" containerName="mariadb-account-create"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.802930 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dba5adf-299f-404c-91f9-c5848e9babe4" containerName="horizon-log"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.802936 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" containerName="proxy-httpd"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.802945 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e6e2f92-5e58-4964-a7cf-e04f3c87b20f" containerName="mariadb-database-create"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.802953 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2050dab9-a003-4e88-b06c-5fb9cadd5956" containerName="mariadb-database-create"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.802964 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dba5adf-299f-404c-91f9-c5848e9babe4" containerName="horizon"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.802969 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" containerName="ceilometer-central-agent"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.805908 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.808719 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.813019 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.828006 4718 scope.go:117] "RemoveContainer" containerID="8aea759c87cbd913895f69805c9fdeecaf55bb9d1cbc1219826f0b4cfe4f2adf"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.838793 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.917491 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-config-data\") pod \"ceilometer-0\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " pod="openstack/ceilometer-0"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.917547 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-scripts\") pod \"ceilometer-0\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " pod="openstack/ceilometer-0"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.917592 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-log-httpd\") pod \"ceilometer-0\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " pod="openstack/ceilometer-0"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.917759 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " pod="openstack/ceilometer-0"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.917779 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-run-httpd\") pod \"ceilometer-0\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " pod="openstack/ceilometer-0"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.917814 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx77q\" (UniqueName: \"kubernetes.io/projected/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-kube-api-access-jx77q\") pod \"ceilometer-0\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " pod="openstack/ceilometer-0"
Nov 23 15:03:27 crc kubenswrapper[4718]: I1123 15:03:27.917854 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " pod="openstack/ceilometer-0"
Nov 23 15:03:28 crc kubenswrapper[4718]: I1123 15:03:28.019627 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " pod="openstack/ceilometer-0"
Nov 23 15:03:28 crc kubenswrapper[4718]: I1123 15:03:28.019712 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-run-httpd\") pod \"ceilometer-0\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " pod="openstack/ceilometer-0"
Nov 23 15:03:28 crc kubenswrapper[4718]: I1123 15:03:28.020202 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-run-httpd\") pod \"ceilometer-0\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " pod="openstack/ceilometer-0"
Nov 23 15:03:28 crc kubenswrapper[4718]: I1123 15:03:28.020276 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx77q\" (UniqueName: \"kubernetes.io/projected/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-kube-api-access-jx77q\") pod \"ceilometer-0\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " pod="openstack/ceilometer-0"
Nov 23 15:03:28 crc kubenswrapper[4718]: I1123 15:03:28.020684 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " pod="openstack/ceilometer-0"
Nov 23 15:03:28 crc kubenswrapper[4718]: I1123 15:03:28.021150 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-config-data\") pod \"ceilometer-0\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " pod="openstack/ceilometer-0"
Nov 23 15:03:28 crc kubenswrapper[4718]: I1123 15:03:28.021495 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-scripts\") pod \"ceilometer-0\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " pod="openstack/ceilometer-0"
Nov 23 15:03:28 crc kubenswrapper[4718]: I1123 15:03:28.021906 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-log-httpd\") pod \"ceilometer-0\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " pod="openstack/ceilometer-0"
Nov 23 15:03:28 crc kubenswrapper[4718]: I1123 15:03:28.022198 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-log-httpd\") pod \"ceilometer-0\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " pod="openstack/ceilometer-0"
Nov 23 15:03:28 crc kubenswrapper[4718]: I1123 15:03:28.027705 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-scripts\") pod \"ceilometer-0\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " pod="openstack/ceilometer-0"
Nov 23 15:03:28 crc kubenswrapper[4718]: I1123 15:03:28.027976 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-config-data\") pod \"ceilometer-0\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " pod="openstack/ceilometer-0"
Nov 23 15:03:28 crc kubenswrapper[4718]: I1123 15:03:28.028372 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " pod="openstack/ceilometer-0"
Nov 23 15:03:28 crc kubenswrapper[4718]: I1123 15:03:28.035603 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " pod="openstack/ceilometer-0"
Nov 23 15:03:28 crc kubenswrapper[4718]: I1123 15:03:28.042612 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx77q\" (UniqueName: \"kubernetes.io/projected/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-kube-api-access-jx77q\") pod \"ceilometer-0\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " pod="openstack/ceilometer-0"
Nov 23 15:03:28 crc kubenswrapper[4718]: I1123 15:03:28.134813 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 15:03:28 crc kubenswrapper[4718]: I1123 15:03:28.453203 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8" path="/var/lib/kubelet/pods/b6cce5fe-c8a2-4cec-bb3f-dbd7ce0009b8/volumes"
Nov 23 15:03:28 crc kubenswrapper[4718]: I1123 15:03:28.599065 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 15:03:28 crc kubenswrapper[4718]: W1123 15:03:28.603767 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf35e2ff4_dfa3_4a4a_9625_efcdedac392b.slice/crio-ec644a7145775648c36d7553f81a8d6a33d6b62ca949fc1be8519a10eb3399ec WatchSource:0}: Error finding container ec644a7145775648c36d7553f81a8d6a33d6b62ca949fc1be8519a10eb3399ec: Status 404 returned error can't find the container with id ec644a7145775648c36d7553f81a8d6a33d6b62ca949fc1be8519a10eb3399ec
Nov 23 15:03:28 crc kubenswrapper[4718]: I1123 15:03:28.649672 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f35e2ff4-dfa3-4a4a-9625-efcdedac392b","Type":"ContainerStarted","Data":"ec644a7145775648c36d7553f81a8d6a33d6b62ca949fc1be8519a10eb3399ec"}
Nov 23 15:03:28 crc kubenswrapper[4718]: I1123 15:03:28.868810 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 15:03:29 crc kubenswrapper[4718]: I1123 15:03:29.670227 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f35e2ff4-dfa3-4a4a-9625-efcdedac392b","Type":"ContainerStarted","Data":"11b6206986760c888485bfa2c219304a49d167bc0c38f1531943919d0b63cb19"}
Nov 23 15:03:30 crc kubenswrapper[4718]: I1123 15:03:30.395405 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-57485cc44d-fj2x7"
Nov 23 15:03:30 crc kubenswrapper[4718]: I1123 15:03:30.680293 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f35e2ff4-dfa3-4a4a-9625-efcdedac392b","Type":"ContainerStarted","Data":"7df4328ad81374dd829067a632d7c0937db6320917e78b1404f449789579c818"}
Nov 23 15:03:31 crc kubenswrapper[4718]: I1123 15:03:31.688766 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f35e2ff4-dfa3-4a4a-9625-efcdedac392b","Type":"ContainerStarted","Data":"3187af6325d1d343a03924f33f1c8fdd59bddfd31a0cb71fca7d854d4d69e1d1"}
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.028378 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-d4jtn"]
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.029800 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-d4jtn"
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.032330 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.032330 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-s8qd9"
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.032521 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.047906 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-d4jtn"]
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.112246 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34-scripts\") pod \"nova-cell0-conductor-db-sync-d4jtn\" (UID: \"ec5dfd6e-eef7-47d1-a7bf-a3f406387f34\") " pod="openstack/nova-cell0-conductor-db-sync-d4jtn"
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.112309 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34-config-data\") pod \"nova-cell0-conductor-db-sync-d4jtn\" (UID: \"ec5dfd6e-eef7-47d1-a7bf-a3f406387f34\") " pod="openstack/nova-cell0-conductor-db-sync-d4jtn"
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.112377 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s54dd\" (UniqueName: \"kubernetes.io/projected/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34-kube-api-access-s54dd\") pod \"nova-cell0-conductor-db-sync-d4jtn\" (UID: \"ec5dfd6e-eef7-47d1-a7bf-a3f406387f34\") " pod="openstack/nova-cell0-conductor-db-sync-d4jtn"
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.112412 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-d4jtn\" (UID: \"ec5dfd6e-eef7-47d1-a7bf-a3f406387f34\") " pod="openstack/nova-cell0-conductor-db-sync-d4jtn"
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.214162 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34-scripts\") pod \"nova-cell0-conductor-db-sync-d4jtn\" (UID: \"ec5dfd6e-eef7-47d1-a7bf-a3f406387f34\") " pod="openstack/nova-cell0-conductor-db-sync-d4jtn"
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.214227 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34-config-data\") pod \"nova-cell0-conductor-db-sync-d4jtn\" (UID: \"ec5dfd6e-eef7-47d1-a7bf-a3f406387f34\") " pod="openstack/nova-cell0-conductor-db-sync-d4jtn"
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.214292 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s54dd\" (UniqueName: \"kubernetes.io/projected/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34-kube-api-access-s54dd\") pod \"nova-cell0-conductor-db-sync-d4jtn\" (UID: \"ec5dfd6e-eef7-47d1-a7bf-a3f406387f34\") " pod="openstack/nova-cell0-conductor-db-sync-d4jtn"
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.214330 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-d4jtn\" (UID: \"ec5dfd6e-eef7-47d1-a7bf-a3f406387f34\") " pod="openstack/nova-cell0-conductor-db-sync-d4jtn"
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.219406 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34-scripts\") pod \"nova-cell0-conductor-db-sync-d4jtn\" (UID: \"ec5dfd6e-eef7-47d1-a7bf-a3f406387f34\") " pod="openstack/nova-cell0-conductor-db-sync-d4jtn"
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.219410 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-d4jtn\" (UID: \"ec5dfd6e-eef7-47d1-a7bf-a3f406387f34\") " pod="openstack/nova-cell0-conductor-db-sync-d4jtn"
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.220187 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34-config-data\") pod \"nova-cell0-conductor-db-sync-d4jtn\" (UID: \"ec5dfd6e-eef7-47d1-a7bf-a3f406387f34\") " pod="openstack/nova-cell0-conductor-db-sync-d4jtn"
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.230726 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s54dd\" (UniqueName: \"kubernetes.io/projected/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34-kube-api-access-s54dd\") pod \"nova-cell0-conductor-db-sync-d4jtn\" (UID: \"ec5dfd6e-eef7-47d1-a7bf-a3f406387f34\") " pod="openstack/nova-cell0-conductor-db-sync-d4jtn"
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.345189 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-d4jtn"
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.635544 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6566df567c-72brl"
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.698282 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-57485cc44d-fj2x7"]
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.698580 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-57485cc44d-fj2x7" podUID="e89948ec-25f4-4d02-985b-f9fdd43437d1" containerName="neutron-api" containerID="cri-o://027f3f200ee28e1086fe86cf8460915abc92236677554634a41fa89d256cea22" gracePeriod=30
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.699009 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-57485cc44d-fj2x7" podUID="e89948ec-25f4-4d02-985b-f9fdd43437d1" containerName="neutron-httpd" containerID="cri-o://bae1c5833127e59611ac73e1a9bce7b907b86a6fd777fae325f12b7f9860b496" gracePeriod=30
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.717512 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f35e2ff4-dfa3-4a4a-9625-efcdedac392b","Type":"ContainerStarted","Data":"4d549e7f8aa1ab4213c74fa027af45cec1edffc63b8ebbe5bad0121bb6585ade"}
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.717650 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f35e2ff4-dfa3-4a4a-9625-efcdedac392b" containerName="ceilometer-central-agent" containerID="cri-o://11b6206986760c888485bfa2c219304a49d167bc0c38f1531943919d0b63cb19" gracePeriod=30
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.717677 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.717696 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f35e2ff4-dfa3-4a4a-9625-efcdedac392b" containerName="proxy-httpd" containerID="cri-o://4d549e7f8aa1ab4213c74fa027af45cec1edffc63b8ebbe5bad0121bb6585ade" gracePeriod=30
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.717733 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f35e2ff4-dfa3-4a4a-9625-efcdedac392b" containerName="sg-core" containerID="cri-o://3187af6325d1d343a03924f33f1c8fdd59bddfd31a0cb71fca7d854d4d69e1d1" gracePeriod=30
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.717770 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f35e2ff4-dfa3-4a4a-9625-efcdedac392b" containerName="ceilometer-notification-agent" containerID="cri-o://7df4328ad81374dd829067a632d7c0937db6320917e78b1404f449789579c818" gracePeriod=30
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.741318 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.000387815 podStartE2EDuration="6.741296901s" podCreationTimestamp="2025-11-23 15:03:27 +0000 UTC" firstStartedPulling="2025-11-23 15:03:28.605725242 +0000 UTC m=+1059.845345086" lastFinishedPulling="2025-11-23 15:03:33.346634328 +0000 UTC m=+1064.586254172" observedRunningTime="2025-11-23 15:03:33.738987588 +0000 UTC m=+1064.978607442" watchObservedRunningTime="2025-11-23 15:03:33.741296901 +0000 UTC m=+1064.980916745"
Nov 23 15:03:33 crc kubenswrapper[4718]: I1123 15:03:33.894259 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-d4jtn"]
Nov 23 15:03:34 crc kubenswrapper[4718]: I1123 15:03:34.733104 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-d4jtn" event={"ID":"ec5dfd6e-eef7-47d1-a7bf-a3f406387f34","Type":"ContainerStarted","Data":"00b8f4d4c16b82b0c94a0190445fbe0bbbbd0b6b1735ba138fb25a154b44ca62"}
Nov 23 15:03:34 crc kubenswrapper[4718]: I1123 15:03:34.736498 4718 generic.go:334] "Generic (PLEG): container finished" podID="e89948ec-25f4-4d02-985b-f9fdd43437d1" containerID="bae1c5833127e59611ac73e1a9bce7b907b86a6fd777fae325f12b7f9860b496" exitCode=0
Nov 23 15:03:34 crc kubenswrapper[4718]: I1123 15:03:34.736554 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57485cc44d-fj2x7" event={"ID":"e89948ec-25f4-4d02-985b-f9fdd43437d1","Type":"ContainerDied","Data":"bae1c5833127e59611ac73e1a9bce7b907b86a6fd777fae325f12b7f9860b496"}
Nov 23 15:03:34 crc kubenswrapper[4718]: I1123 15:03:34.739865 4718 generic.go:334] "Generic (PLEG): container finished" podID="f35e2ff4-dfa3-4a4a-9625-efcdedac392b" containerID="3187af6325d1d343a03924f33f1c8fdd59bddfd31a0cb71fca7d854d4d69e1d1" exitCode=2
Nov 23 15:03:34 crc kubenswrapper[4718]: I1123 15:03:34.739896 4718 generic.go:334] "Generic (PLEG): container finished" podID="f35e2ff4-dfa3-4a4a-9625-efcdedac392b" containerID="7df4328ad81374dd829067a632d7c0937db6320917e78b1404f449789579c818" exitCode=0
Nov 23 15:03:34 crc kubenswrapper[4718]: I1123 15:03:34.739915 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f35e2ff4-dfa3-4a4a-9625-efcdedac392b","Type":"ContainerDied","Data":"3187af6325d1d343a03924f33f1c8fdd59bddfd31a0cb71fca7d854d4d69e1d1"}
Nov 23 15:03:34 crc kubenswrapper[4718]: I1123 15:03:34.739937 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f35e2ff4-dfa3-4a4a-9625-efcdedac392b","Type":"ContainerDied","Data":"7df4328ad81374dd829067a632d7c0937db6320917e78b1404f449789579c818"}
Nov 23 15:03:37 crc kubenswrapper[4718]: I1123 15:03:37.774076 4718 generic.go:334] "Generic (PLEG): container finished" podID="f35e2ff4-dfa3-4a4a-9625-efcdedac392b" containerID="11b6206986760c888485bfa2c219304a49d167bc0c38f1531943919d0b63cb19" exitCode=0
Nov 23 15:03:37 crc kubenswrapper[4718]: I1123 15:03:37.774649 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f35e2ff4-dfa3-4a4a-9625-efcdedac392b","Type":"ContainerDied","Data":"11b6206986760c888485bfa2c219304a49d167bc0c38f1531943919d0b63cb19"}
Nov 23 15:03:39 crc kubenswrapper[4718]: I1123 15:03:39.395261 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 15:03:39 crc kubenswrapper[4718]: I1123 15:03:39.395822 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8a622ba5-be24-4cdd-a1eb-850f851ea41a" containerName="glance-log" containerID="cri-o://5fd7bde96f49172ceee57457fa3541e01976a1b89a385cecc7711665f83c8c20" gracePeriod=30
Nov 23 15:03:39 crc kubenswrapper[4718]: I1123 15:03:39.396218 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8a622ba5-be24-4cdd-a1eb-850f851ea41a" containerName="glance-httpd" containerID="cri-o://84c13bdace4917a8661ee3b99b9a1c2d2e3143c575041751bdf3393cc99e056a" gracePeriod=30
Nov 23 15:03:39 crc kubenswrapper[4718]: I1123 15:03:39.801262 4718 generic.go:334] "Generic (PLEG): container finished" podID="8a622ba5-be24-4cdd-a1eb-850f851ea41a" containerID="5fd7bde96f49172ceee57457fa3541e01976a1b89a385cecc7711665f83c8c20" exitCode=143
Nov 23 15:03:39 crc kubenswrapper[4718]: I1123 15:03:39.801344 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a622ba5-be24-4cdd-a1eb-850f851ea41a","Type":"ContainerDied","Data":"5fd7bde96f49172ceee57457fa3541e01976a1b89a385cecc7711665f83c8c20"}
Nov 23 15:03:39 crc kubenswrapper[4718]: I1123 15:03:39.805064 4718 generic.go:334] "Generic (PLEG): container finished" podID="e89948ec-25f4-4d02-985b-f9fdd43437d1" containerID="027f3f200ee28e1086fe86cf8460915abc92236677554634a41fa89d256cea22" exitCode=0
Nov 23 15:03:39 crc kubenswrapper[4718]: I1123 15:03:39.805110 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57485cc44d-fj2x7" event={"ID":"e89948ec-25f4-4d02-985b-f9fdd43437d1","Type":"ContainerDied","Data":"027f3f200ee28e1086fe86cf8460915abc92236677554634a41fa89d256cea22"}
Nov 23 15:03:40 crc kubenswrapper[4718]: I1123 15:03:40.214928 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 23 15:03:40 crc kubenswrapper[4718]: I1123 15:03:40.215181 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3ab8b114-a092-4eb7-a379-fe5c259f1fe9" containerName="glance-log" containerID="cri-o://c1aef4a1eaac963c42a4b7c228b213bbd1ac43b742d2cfab6a9c91930bb920ab" gracePeriod=30
Nov 23 15:03:40 crc kubenswrapper[4718]: I1123 15:03:40.215257 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3ab8b114-a092-4eb7-a379-fe5c259f1fe9" containerName="glance-httpd" containerID="cri-o://5bda54d511db35bf484b5e54da4bec5107a0f415da4439587098774398852ae3" gracePeriod=30
Nov 23 15:03:40 crc kubenswrapper[4718]: I1123 15:03:40.815213 4718 generic.go:334] "Generic (PLEG): container finished" podID="3ab8b114-a092-4eb7-a379-fe5c259f1fe9" containerID="c1aef4a1eaac963c42a4b7c228b213bbd1ac43b742d2cfab6a9c91930bb920ab" exitCode=143
Nov 23 15:03:40 crc kubenswrapper[4718]: I1123 15:03:40.815298 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3ab8b114-a092-4eb7-a379-fe5c259f1fe9","Type":"ContainerDied","Data":"c1aef4a1eaac963c42a4b7c228b213bbd1ac43b742d2cfab6a9c91930bb920ab"}
Nov 23 15:03:41 crc kubenswrapper[4718]: I1123 15:03:41.945857 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-57485cc44d-fj2x7"
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.094626 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89948ec-25f4-4d02-985b-f9fdd43437d1-combined-ca-bundle\") pod \"e89948ec-25f4-4d02-985b-f9fdd43437d1\" (UID: \"e89948ec-25f4-4d02-985b-f9fdd43437d1\") "
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.094710 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e89948ec-25f4-4d02-985b-f9fdd43437d1-config\") pod \"e89948ec-25f4-4d02-985b-f9fdd43437d1\" (UID: \"e89948ec-25f4-4d02-985b-f9fdd43437d1\") "
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.094854 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7p6b\" (UniqueName: \"kubernetes.io/projected/e89948ec-25f4-4d02-985b-f9fdd43437d1-kube-api-access-k7p6b\") pod \"e89948ec-25f4-4d02-985b-f9fdd43437d1\" (UID: \"e89948ec-25f4-4d02-985b-f9fdd43437d1\") "
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.094881 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e89948ec-25f4-4d02-985b-f9fdd43437d1-httpd-config\") pod \"e89948ec-25f4-4d02-985b-f9fdd43437d1\" (UID: \"e89948ec-25f4-4d02-985b-f9fdd43437d1\") "
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.094911 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89948ec-25f4-4d02-985b-f9fdd43437d1-ovndb-tls-certs\") pod \"e89948ec-25f4-4d02-985b-f9fdd43437d1\" (UID: \"e89948ec-25f4-4d02-985b-f9fdd43437d1\") "
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.099057 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89948ec-25f4-4d02-985b-f9fdd43437d1-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e89948ec-25f4-4d02-985b-f9fdd43437d1" (UID: "e89948ec-25f4-4d02-985b-f9fdd43437d1"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.099555 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e89948ec-25f4-4d02-985b-f9fdd43437d1-kube-api-access-k7p6b" (OuterVolumeSpecName: "kube-api-access-k7p6b") pod "e89948ec-25f4-4d02-985b-f9fdd43437d1" (UID: "e89948ec-25f4-4d02-985b-f9fdd43437d1"). InnerVolumeSpecName "kube-api-access-k7p6b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.145184 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89948ec-25f4-4d02-985b-f9fdd43437d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e89948ec-25f4-4d02-985b-f9fdd43437d1" (UID: "e89948ec-25f4-4d02-985b-f9fdd43437d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.146740 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89948ec-25f4-4d02-985b-f9fdd43437d1-config" (OuterVolumeSpecName: "config") pod "e89948ec-25f4-4d02-985b-f9fdd43437d1" (UID: "e89948ec-25f4-4d02-985b-f9fdd43437d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.172926 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89948ec-25f4-4d02-985b-f9fdd43437d1-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e89948ec-25f4-4d02-985b-f9fdd43437d1" (UID: "e89948ec-25f4-4d02-985b-f9fdd43437d1"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.198074 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7p6b\" (UniqueName: \"kubernetes.io/projected/e89948ec-25f4-4d02-985b-f9fdd43437d1-kube-api-access-k7p6b\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.198107 4718 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e89948ec-25f4-4d02-985b-f9fdd43437d1-httpd-config\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.198117 4718 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89948ec-25f4-4d02-985b-f9fdd43437d1-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.198125 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89948ec-25f4-4d02-985b-f9fdd43437d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.198133 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e89948ec-25f4-4d02-985b-f9fdd43437d1-config\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.549035 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="8a622ba5-be24-4cdd-a1eb-850f851ea41a" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.153:9292/healthcheck\": read tcp 10.217.0.2:45184->10.217.0.153:9292: read: connection reset by peer"
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.549075 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="8a622ba5-be24-4cdd-a1eb-850f851ea41a" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.153:9292/healthcheck\": read tcp 10.217.0.2:45192->10.217.0.153:9292: read: connection reset by peer"
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.838252 4718 generic.go:334] "Generic (PLEG): container finished" podID="8a622ba5-be24-4cdd-a1eb-850f851ea41a" containerID="84c13bdace4917a8661ee3b99b9a1c2d2e3143c575041751bdf3393cc99e056a" exitCode=0
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.838306 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a622ba5-be24-4cdd-a1eb-850f851ea41a","Type":"ContainerDied","Data":"84c13bdace4917a8661ee3b99b9a1c2d2e3143c575041751bdf3393cc99e056a"}
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.840021 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-d4jtn" event={"ID":"ec5dfd6e-eef7-47d1-a7bf-a3f406387f34","Type":"ContainerStarted","Data":"bbf363906b79c5b22a10349c4be4c0518138546ef59f20f6d612c1741a833b4b"}
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.845842 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57485cc44d-fj2x7" event={"ID":"e89948ec-25f4-4d02-985b-f9fdd43437d1","Type":"ContainerDied","Data":"7bed19f2044f7d120dd3130ea58adc866be8a93f37516a87c2385afd0d66d9c7"}
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.845886 4718 scope.go:117] "RemoveContainer" containerID="bae1c5833127e59611ac73e1a9bce7b907b86a6fd777fae325f12b7f9860b496"
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.846071 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-57485cc44d-fj2x7"
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.870923 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-d4jtn" podStartSLOduration=2.126168602 podStartE2EDuration="9.87090147s" podCreationTimestamp="2025-11-23 15:03:33 +0000 UTC" firstStartedPulling="2025-11-23 15:03:33.906055508 +0000 UTC m=+1065.145675352" lastFinishedPulling="2025-11-23 15:03:41.650788376 +0000 UTC m=+1072.890408220" observedRunningTime="2025-11-23 15:03:42.86796376 +0000 UTC m=+1074.107583614" watchObservedRunningTime="2025-11-23 15:03:42.87090147 +0000 UTC m=+1074.110521304"
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.883814 4718 scope.go:117] "RemoveContainer" containerID="027f3f200ee28e1086fe86cf8460915abc92236677554634a41fa89d256cea22"
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.923713 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-57485cc44d-fj2x7"]
Nov 23 15:03:42 crc kubenswrapper[4718]: I1123 15:03:42.953794 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-57485cc44d-fj2x7"]
Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.071195 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.219905 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") "
Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.219986 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a622ba5-be24-4cdd-a1eb-850f851ea41a-logs\") pod \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") "
Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.220025 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a622ba5-be24-4cdd-a1eb-850f851ea41a-combined-ca-bundle\") pod \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") "
Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.220172 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a622ba5-be24-4cdd-a1eb-850f851ea41a-httpd-run\") pod \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") "
Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.220187 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a622ba5-be24-4cdd-a1eb-850f851ea41a-config-data\") pod \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") "
Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.220206 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a622ba5-be24-4cdd-a1eb-850f851ea41a-public-tls-certs\") pod \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") "
Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.220278 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6vfq\" (UniqueName: \"kubernetes.io/projected/8a622ba5-be24-4cdd-a1eb-850f851ea41a-kube-api-access-t6vfq\") pod \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") "
Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.220295 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a622ba5-be24-4cdd-a1eb-850f851ea41a-scripts\") pod \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\" (UID: \"8a622ba5-be24-4cdd-a1eb-850f851ea41a\") "
Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.220616 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a622ba5-be24-4cdd-a1eb-850f851ea41a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8a622ba5-be24-4cdd-a1eb-850f851ea41a" (UID: "8a622ba5-be24-4cdd-a1eb-850f851ea41a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.220819 4718 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a622ba5-be24-4cdd-a1eb-850f851ea41a-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.220920 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a622ba5-be24-4cdd-a1eb-850f851ea41a-logs" (OuterVolumeSpecName: "logs") pod "8a622ba5-be24-4cdd-a1eb-850f851ea41a" (UID: "8a622ba5-be24-4cdd-a1eb-850f851ea41a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.226072 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a622ba5-be24-4cdd-a1eb-850f851ea41a-kube-api-access-t6vfq" (OuterVolumeSpecName: "kube-api-access-t6vfq") pod "8a622ba5-be24-4cdd-a1eb-850f851ea41a" (UID: "8a622ba5-be24-4cdd-a1eb-850f851ea41a"). InnerVolumeSpecName "kube-api-access-t6vfq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.226178 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "8a622ba5-be24-4cdd-a1eb-850f851ea41a" (UID: "8a622ba5-be24-4cdd-a1eb-850f851ea41a"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.227570 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a622ba5-be24-4cdd-a1eb-850f851ea41a-scripts" (OuterVolumeSpecName: "scripts") pod "8a622ba5-be24-4cdd-a1eb-850f851ea41a" (UID: "8a622ba5-be24-4cdd-a1eb-850f851ea41a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.262122 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a622ba5-be24-4cdd-a1eb-850f851ea41a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a622ba5-be24-4cdd-a1eb-850f851ea41a" (UID: "8a622ba5-be24-4cdd-a1eb-850f851ea41a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.286647 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a622ba5-be24-4cdd-a1eb-850f851ea41a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8a622ba5-be24-4cdd-a1eb-850f851ea41a" (UID: "8a622ba5-be24-4cdd-a1eb-850f851ea41a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.287932 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a622ba5-be24-4cdd-a1eb-850f851ea41a-config-data" (OuterVolumeSpecName: "config-data") pod "8a622ba5-be24-4cdd-a1eb-850f851ea41a" (UID: "8a622ba5-be24-4cdd-a1eb-850f851ea41a"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.322632 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a622ba5-be24-4cdd-a1eb-850f851ea41a-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.322669 4718 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a622ba5-be24-4cdd-a1eb-850f851ea41a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.322681 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6vfq\" (UniqueName: \"kubernetes.io/projected/8a622ba5-be24-4cdd-a1eb-850f851ea41a-kube-api-access-t6vfq\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.322690 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a622ba5-be24-4cdd-a1eb-850f851ea41a-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.322721 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.322731 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a622ba5-be24-4cdd-a1eb-850f851ea41a-logs\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.322738 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a622ba5-be24-4cdd-a1eb-850f851ea41a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.343590 4718 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.424706 4718 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.858262 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.858830 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a622ba5-be24-4cdd-a1eb-850f851ea41a","Type":"ContainerDied","Data":"c333389cd5bb9061fcf5069dee044ddfda733f741b9edcd45b9026df6dc2cea5"} Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.859133 4718 scope.go:117] "RemoveContainer" containerID="84c13bdace4917a8661ee3b99b9a1c2d2e3143c575041751bdf3393cc99e056a" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.879705 4718 generic.go:334] "Generic (PLEG): container finished" podID="3ab8b114-a092-4eb7-a379-fe5c259f1fe9" containerID="5bda54d511db35bf484b5e54da4bec5107a0f415da4439587098774398852ae3" exitCode=0 Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.880451 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3ab8b114-a092-4eb7-a379-fe5c259f1fe9","Type":"ContainerDied","Data":"5bda54d511db35bf484b5e54da4bec5107a0f415da4439587098774398852ae3"} Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.880482 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3ab8b114-a092-4eb7-a379-fe5c259f1fe9","Type":"ContainerDied","Data":"e39ecd6bf971c74a6d459b21a597ddb045a821e15cf97759de649e4ea7e1b0cb"} Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.880495 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e39ecd6bf971c74a6d459b21a597ddb045a821e15cf97759de649e4ea7e1b0cb" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.886898 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.906364 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.917256 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.941629 4718 scope.go:117] "RemoveContainer" containerID="5fd7bde96f49172ceee57457fa3541e01976a1b89a385cecc7711665f83c8c20" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.950201 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 15:03:43 crc kubenswrapper[4718]: E1123 15:03:43.950792 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a622ba5-be24-4cdd-a1eb-850f851ea41a" containerName="glance-log" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.950861 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a622ba5-be24-4cdd-a1eb-850f851ea41a" containerName="glance-log" Nov 23 15:03:43 crc kubenswrapper[4718]: E1123 15:03:43.950924 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e89948ec-25f4-4d02-985b-f9fdd43437d1" containerName="neutron-httpd" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.950985 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e89948ec-25f4-4d02-985b-f9fdd43437d1" containerName="neutron-httpd" Nov 23 15:03:43 crc kubenswrapper[4718]: E1123 15:03:43.951046 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab8b114-a092-4eb7-a379-fe5c259f1fe9" containerName="glance-log" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.951097 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab8b114-a092-4eb7-a379-fe5c259f1fe9" containerName="glance-log" Nov 23 15:03:43 crc kubenswrapper[4718]: E1123 15:03:43.951159 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a622ba5-be24-4cdd-a1eb-850f851ea41a" containerName="glance-httpd" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.951211 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a622ba5-be24-4cdd-a1eb-850f851ea41a" containerName="glance-httpd" Nov 23 15:03:43 crc kubenswrapper[4718]: E1123 15:03:43.951277 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e89948ec-25f4-4d02-985b-f9fdd43437d1" containerName="neutron-api" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.951328 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e89948ec-25f4-4d02-985b-f9fdd43437d1" containerName="neutron-api" Nov 23 15:03:43 crc kubenswrapper[4718]: E1123 15:03:43.951382 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab8b114-a092-4eb7-a379-fe5c259f1fe9" containerName="glance-httpd" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.951432 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab8b114-a092-4eb7-a379-fe5c259f1fe9" containerName="glance-httpd" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.951755 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a622ba5-be24-4cdd-a1eb-850f851ea41a" containerName="glance-httpd" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.951817 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab8b114-a092-4eb7-a379-fe5c259f1fe9" containerName="glance-log" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.951872 4718 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e89948ec-25f4-4d02-985b-f9fdd43437d1" containerName="neutron-api" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.951935 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab8b114-a092-4eb7-a379-fe5c259f1fe9" containerName="glance-httpd" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.951995 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a622ba5-be24-4cdd-a1eb-850f851ea41a" containerName="glance-log" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.952051 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="e89948ec-25f4-4d02-985b-f9fdd43437d1" containerName="neutron-httpd" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.952966 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.958382 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.958389 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 23 15:03:43 crc kubenswrapper[4718]: I1123 15:03:43.979071 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.034250 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-combined-ca-bundle\") pod \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.034325 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-httpd-run\") pod \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.034375 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdst9\" (UniqueName: \"kubernetes.io/projected/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-kube-api-access-sdst9\") pod \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.034601 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.034665 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-internal-tls-certs\") pod \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.034714 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-logs\") pod \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " Nov 23 15:03:44 crc 
kubenswrapper[4718]: I1123 15:03:44.034839 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-scripts\") pod \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.034885 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-config-data\") pod \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\" (UID: \"3ab8b114-a092-4eb7-a379-fe5c259f1fe9\") " Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.035148 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34d380e0-2ed9-45ce-9c05-85b138e3a99a-logs\") pod \"glance-default-external-api-0\" (UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") " pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.035206 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34d380e0-2ed9-45ce-9c05-85b138e3a99a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") " pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.035234 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34d380e0-2ed9-45ce-9c05-85b138e3a99a-config-data\") pod \"glance-default-external-api-0\" (UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") " pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.035263 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34d380e0-2ed9-45ce-9c05-85b138e3a99a-scripts\") pod \"glance-default-external-api-0\" (UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") " pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.035295 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34d380e0-2ed9-45ce-9c05-85b138e3a99a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") " pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.035335 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrfnp\" (UniqueName: \"kubernetes.io/projected/34d380e0-2ed9-45ce-9c05-85b138e3a99a-kube-api-access-nrfnp\") pod \"glance-default-external-api-0\" (UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") " pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.035415 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") " pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.035459 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34d380e0-2ed9-45ce-9c05-85b138e3a99a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") " pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.035578 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3ab8b114-a092-4eb7-a379-fe5c259f1fe9" (UID: "3ab8b114-a092-4eb7-a379-fe5c259f1fe9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.035905 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-logs" (OuterVolumeSpecName: "logs") pod "3ab8b114-a092-4eb7-a379-fe5c259f1fe9" (UID: "3ab8b114-a092-4eb7-a379-fe5c259f1fe9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.040451 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-scripts" (OuterVolumeSpecName: "scripts") pod "3ab8b114-a092-4eb7-a379-fe5c259f1fe9" (UID: "3ab8b114-a092-4eb7-a379-fe5c259f1fe9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.040604 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "3ab8b114-a092-4eb7-a379-fe5c259f1fe9" (UID: "3ab8b114-a092-4eb7-a379-fe5c259f1fe9"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.072605 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ab8b114-a092-4eb7-a379-fe5c259f1fe9" (UID: "3ab8b114-a092-4eb7-a379-fe5c259f1fe9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.073333 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-kube-api-access-sdst9" (OuterVolumeSpecName: "kube-api-access-sdst9") pod "3ab8b114-a092-4eb7-a379-fe5c259f1fe9" (UID: "3ab8b114-a092-4eb7-a379-fe5c259f1fe9"). InnerVolumeSpecName "kube-api-access-sdst9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.099545 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-config-data" (OuterVolumeSpecName: "config-data") pod "3ab8b114-a092-4eb7-a379-fe5c259f1fe9" (UID: "3ab8b114-a092-4eb7-a379-fe5c259f1fe9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.103974 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3ab8b114-a092-4eb7-a379-fe5c259f1fe9" (UID: "3ab8b114-a092-4eb7-a379-fe5c259f1fe9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.137502 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34d380e0-2ed9-45ce-9c05-85b138e3a99a-logs\") pod \"glance-default-external-api-0\" (UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") " pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.137577 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34d380e0-2ed9-45ce-9c05-85b138e3a99a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") " pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.137609 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34d380e0-2ed9-45ce-9c05-85b138e3a99a-config-data\") pod \"glance-default-external-api-0\" (UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") " pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.137643 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34d380e0-2ed9-45ce-9c05-85b138e3a99a-scripts\") pod \"glance-default-external-api-0\" (UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") " pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.137674 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34d380e0-2ed9-45ce-9c05-85b138e3a99a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") " pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.137714 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrfnp\" (UniqueName: \"kubernetes.io/projected/34d380e0-2ed9-45ce-9c05-85b138e3a99a-kube-api-access-nrfnp\") pod \"glance-default-external-api-0\" (UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") " pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.137794 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") " pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.137818 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34d380e0-2ed9-45ce-9c05-85b138e3a99a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") " 
pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.137938 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.137956 4718 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.137970 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-logs\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.137982 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.137993 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.138005 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.138016 4718 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.138028 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdst9\" (UniqueName: \"kubernetes.io/projected/3ab8b114-a092-4eb7-a379-fe5c259f1fe9-kube-api-access-sdst9\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.143749 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.144111 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34d380e0-2ed9-45ce-9c05-85b138e3a99a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") " pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.144150 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34d380e0-2ed9-45ce-9c05-85b138e3a99a-logs\") pod \"glance-default-external-api-0\" (UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") " pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.146090 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34d380e0-2ed9-45ce-9c05-85b138e3a99a-scripts\") pod \"glance-default-external-api-0\" 
(UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") " pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.146679 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34d380e0-2ed9-45ce-9c05-85b138e3a99a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") " pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.148070 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34d380e0-2ed9-45ce-9c05-85b138e3a99a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") " pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.150365 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34d380e0-2ed9-45ce-9c05-85b138e3a99a-config-data\") pod \"glance-default-external-api-0\" (UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") " pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.158825 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrfnp\" (UniqueName: \"kubernetes.io/projected/34d380e0-2ed9-45ce-9c05-85b138e3a99a-kube-api-access-nrfnp\") pod \"glance-default-external-api-0\" (UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") " pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.170644 4718 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.172206 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"34d380e0-2ed9-45ce-9c05-85b138e3a99a\") " pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.240249 4718 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.280148 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.457020 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a622ba5-be24-4cdd-a1eb-850f851ea41a" path="/var/lib/kubelet/pods/8a622ba5-be24-4cdd-a1eb-850f851ea41a/volumes" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.458269 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e89948ec-25f4-4d02-985b-f9fdd43437d1" path="/var/lib/kubelet/pods/e89948ec-25f4-4d02-985b-f9fdd43437d1/volumes" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.848292 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.890296 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"34d380e0-2ed9-45ce-9c05-85b138e3a99a","Type":"ContainerStarted","Data":"1c784a194ddb687a7e177d1b3877e5d1063d8c85485910443aa9d21b550f1bda"} Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.890364 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.969793 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.976822 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.989756 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.991314 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.993457 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.993715 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 23 15:03:44 crc kubenswrapper[4718]: I1123 15:03:44.996244 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.055016 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt4mz\" (UniqueName: \"kubernetes.io/projected/09c60904-5047-4206-97a2-57b5c85a22d5-kube-api-access-bt4mz\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.055126 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09c60904-5047-4206-97a2-57b5c85a22d5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.055184 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09c60904-5047-4206-97a2-57b5c85a22d5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.055271 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09c60904-5047-4206-97a2-57b5c85a22d5-logs\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.055332 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.055379 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c60904-5047-4206-97a2-57b5c85a22d5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.055471 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09c60904-5047-4206-97a2-57b5c85a22d5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.055501 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09c60904-5047-4206-97a2-57b5c85a22d5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.160830 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09c60904-5047-4206-97a2-57b5c85a22d5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.160922 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09c60904-5047-4206-97a2-57b5c85a22d5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.161147 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09c60904-5047-4206-97a2-57b5c85a22d5-logs\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.161195 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.161216 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c60904-5047-4206-97a2-57b5c85a22d5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.161278 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09c60904-5047-4206-97a2-57b5c85a22d5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.161300 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09c60904-5047-4206-97a2-57b5c85a22d5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.161816 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09c60904-5047-4206-97a2-57b5c85a22d5-logs\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.161890 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.161927 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09c60904-5047-4206-97a2-57b5c85a22d5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.162994 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt4mz\" (UniqueName: \"kubernetes.io/projected/09c60904-5047-4206-97a2-57b5c85a22d5-kube-api-access-bt4mz\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.166894 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c60904-5047-4206-97a2-57b5c85a22d5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.191384 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt4mz\" (UniqueName: \"kubernetes.io/projected/09c60904-5047-4206-97a2-57b5c85a22d5-kube-api-access-bt4mz\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.196824 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09c60904-5047-4206-97a2-57b5c85a22d5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.198678 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09c60904-5047-4206-97a2-57b5c85a22d5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.205763 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.212734 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09c60904-5047-4206-97a2-57b5c85a22d5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"09c60904-5047-4206-97a2-57b5c85a22d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.313257 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.851868 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.907554 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"09c60904-5047-4206-97a2-57b5c85a22d5","Type":"ContainerStarted","Data":"f406ed4037f41dff6b1ff19d4b710ec26406127568e2beb5b6eb081bcc118f76"} Nov 23 15:03:45 crc kubenswrapper[4718]: I1123 15:03:45.909714 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"34d380e0-2ed9-45ce-9c05-85b138e3a99a","Type":"ContainerStarted","Data":"d1ba0b91653c6d5ad8f33448b42967cc8e2f72534de36728a650c26976f04189"} Nov 23 15:03:46 crc kubenswrapper[4718]: I1123 15:03:46.458870 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab8b114-a092-4eb7-a379-fe5c259f1fe9" path="/var/lib/kubelet/pods/3ab8b114-a092-4eb7-a379-fe5c259f1fe9/volumes" Nov 23 15:03:46 crc kubenswrapper[4718]: I1123 15:03:46.928276 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"34d380e0-2ed9-45ce-9c05-85b138e3a99a","Type":"ContainerStarted","Data":"bf682f88c2efc71a7c51ddb04cd10b66308212e6ebde7ccfcc23b32eae80ed65"} Nov 23 15:03:46 crc kubenswrapper[4718]: I1123 15:03:46.931765 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"09c60904-5047-4206-97a2-57b5c85a22d5","Type":"ContainerStarted","Data":"962d9fcba1eedc6f0b31953394cb41688702dba71756bfa36b6a5258b9f9408a"} Nov 23 15:03:46 crc kubenswrapper[4718]: I1123 15:03:46.947293 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.947276197 podStartE2EDuration="3.947276197s" podCreationTimestamp="2025-11-23 15:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:03:46.945251252 +0000 UTC m=+1078.184871096" watchObservedRunningTime="2025-11-23 15:03:46.947276197 +0000 UTC m=+1078.186896041" Nov 23 15:03:47 crc kubenswrapper[4718]: I1123 15:03:47.941469 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"09c60904-5047-4206-97a2-57b5c85a22d5","Type":"ContainerStarted","Data":"52083a64987c315dd9a9efa8996e026a750efd3d14dbaef19635257614e65909"} Nov 23 15:03:47 crc kubenswrapper[4718]: I1123 15:03:47.961914 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.961894902 podStartE2EDuration="3.961894902s" podCreationTimestamp="2025-11-23 15:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:03:47.957839071 +0000 UTC m=+1079.197458915" watchObservedRunningTime="2025-11-23 15:03:47.961894902 +0000 UTC m=+1079.201514746" Nov 23 15:03:53 crc kubenswrapper[4718]: I1123 15:03:53.052923 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 23 15:03:53 crc kubenswrapper[4718]: I1123 15:03:53.053601 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 15:03:54 crc kubenswrapper[4718]: I1123 15:03:54.280634 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 23 15:03:54 crc kubenswrapper[4718]: I1123 15:03:54.281013 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 23 15:03:54 crc kubenswrapper[4718]: I1123 15:03:54.316013 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 23 15:03:54 crc kubenswrapper[4718]: I1123 15:03:54.332717 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 23 15:03:55 crc kubenswrapper[4718]: I1123 15:03:55.015090 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 23 15:03:55 crc kubenswrapper[4718]: I1123 15:03:55.015482 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 23 15:03:55 crc kubenswrapper[4718]: I1123 15:03:55.313518 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 23 15:03:55 crc kubenswrapper[4718]: I1123 15:03:55.313627 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 23 15:03:55 crc kubenswrapper[4718]: I1123 15:03:55.345578 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 23 15:03:55 crc kubenswrapper[4718]: I1123 15:03:55.374897 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 23 15:03:56 crc kubenswrapper[4718]: I1123 15:03:56.022364 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 15:03:56 crc kubenswrapper[4718]: I1123 15:03:56.022635 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 15:03:56 crc kubenswrapper[4718]: I1123 15:03:56.906202 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 23 15:03:56 crc kubenswrapper[4718]: I1123 15:03:56.909930 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 23 15:03:57 crc kubenswrapper[4718]: I1123 15:03:57.047938 4718 generic.go:334] "Generic (PLEG): container finished" podID="ec5dfd6e-eef7-47d1-a7bf-a3f406387f34" containerID="bbf363906b79c5b22a10349c4be4c0518138546ef59f20f6d612c1741a833b4b" exitCode=0 Nov 23 15:03:57 crc kubenswrapper[4718]: I1123 15:03:57.048113 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-d4jtn" event={"ID":"ec5dfd6e-eef7-47d1-a7bf-a3f406387f34","Type":"ContainerDied","Data":"bbf363906b79c5b22a10349c4be4c0518138546ef59f20f6d612c1741a833b4b"} Nov 23 15:03:58 crc 
kubenswrapper[4718]: I1123 15:03:58.019792 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 23 15:03:58 crc kubenswrapper[4718]: I1123 15:03:58.031849 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 23 15:03:58 crc kubenswrapper[4718]: I1123 15:03:58.139326 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f35e2ff4-dfa3-4a4a-9625-efcdedac392b" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 23 15:03:58 crc kubenswrapper[4718]: I1123 15:03:58.496585 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-d4jtn" Nov 23 15:03:58 crc kubenswrapper[4718]: I1123 15:03:58.622976 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34-combined-ca-bundle\") pod \"ec5dfd6e-eef7-47d1-a7bf-a3f406387f34\" (UID: \"ec5dfd6e-eef7-47d1-a7bf-a3f406387f34\") " Nov 23 15:03:58 crc kubenswrapper[4718]: I1123 15:03:58.623120 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34-scripts\") pod \"ec5dfd6e-eef7-47d1-a7bf-a3f406387f34\" (UID: \"ec5dfd6e-eef7-47d1-a7bf-a3f406387f34\") " Nov 23 15:03:58 crc kubenswrapper[4718]: I1123 15:03:58.623228 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s54dd\" (UniqueName: \"kubernetes.io/projected/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34-kube-api-access-s54dd\") pod \"ec5dfd6e-eef7-47d1-a7bf-a3f406387f34\" (UID: \"ec5dfd6e-eef7-47d1-a7bf-a3f406387f34\") " Nov 23 15:03:58 crc kubenswrapper[4718]: I1123 15:03:58.623273 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34-config-data\") pod \"ec5dfd6e-eef7-47d1-a7bf-a3f406387f34\" (UID: \"ec5dfd6e-eef7-47d1-a7bf-a3f406387f34\") " Nov 23 15:03:58 crc kubenswrapper[4718]: I1123 15:03:58.632570 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34-scripts" (OuterVolumeSpecName: "scripts") pod "ec5dfd6e-eef7-47d1-a7bf-a3f406387f34" (UID: "ec5dfd6e-eef7-47d1-a7bf-a3f406387f34"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:03:58 crc kubenswrapper[4718]: I1123 15:03:58.632835 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34-kube-api-access-s54dd" (OuterVolumeSpecName: "kube-api-access-s54dd") pod "ec5dfd6e-eef7-47d1-a7bf-a3f406387f34" (UID: "ec5dfd6e-eef7-47d1-a7bf-a3f406387f34"). InnerVolumeSpecName "kube-api-access-s54dd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:03:58 crc kubenswrapper[4718]: I1123 15:03:58.655554 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec5dfd6e-eef7-47d1-a7bf-a3f406387f34" (UID: "ec5dfd6e-eef7-47d1-a7bf-a3f406387f34"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:03:58 crc kubenswrapper[4718]: I1123 15:03:58.655599 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34-config-data" (OuterVolumeSpecName: "config-data") pod "ec5dfd6e-eef7-47d1-a7bf-a3f406387f34" (UID: "ec5dfd6e-eef7-47d1-a7bf-a3f406387f34"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:03:58 crc kubenswrapper[4718]: I1123 15:03:58.725593 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s54dd\" (UniqueName: \"kubernetes.io/projected/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34-kube-api-access-s54dd\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:58 crc kubenswrapper[4718]: I1123 15:03:58.725631 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:58 crc kubenswrapper[4718]: I1123 15:03:58.725645 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:58 crc kubenswrapper[4718]: I1123 15:03:58.725655 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:03:59 crc kubenswrapper[4718]: I1123 15:03:59.065858 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-d4jtn" event={"ID":"ec5dfd6e-eef7-47d1-a7bf-a3f406387f34","Type":"ContainerDied","Data":"00b8f4d4c16b82b0c94a0190445fbe0bbbbd0b6b1735ba138fb25a154b44ca62"} Nov 23 15:03:59 crc kubenswrapper[4718]: I1123 15:03:59.066717 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00b8f4d4c16b82b0c94a0190445fbe0bbbbd0b6b1735ba138fb25a154b44ca62" Nov 23 15:03:59 crc kubenswrapper[4718]: I1123 15:03:59.066902 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-d4jtn" Nov 23 15:03:59 crc kubenswrapper[4718]: I1123 15:03:59.162229 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 15:03:59 crc kubenswrapper[4718]: E1123 15:03:59.162930 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec5dfd6e-eef7-47d1-a7bf-a3f406387f34" containerName="nova-cell0-conductor-db-sync" Nov 23 15:03:59 crc kubenswrapper[4718]: I1123 15:03:59.163076 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec5dfd6e-eef7-47d1-a7bf-a3f406387f34" containerName="nova-cell0-conductor-db-sync" Nov 23 15:03:59 crc kubenswrapper[4718]: I1123 15:03:59.163374 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec5dfd6e-eef7-47d1-a7bf-a3f406387f34" containerName="nova-cell0-conductor-db-sync" Nov 23 15:03:59 crc kubenswrapper[4718]: I1123 15:03:59.164191 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 15:03:59 crc kubenswrapper[4718]: I1123 15:03:59.169123 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-s8qd9" Nov 23 15:03:59 crc kubenswrapper[4718]: I1123 15:03:59.169327 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 23 15:03:59 crc kubenswrapper[4718]: I1123 15:03:59.170821 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 15:03:59 crc kubenswrapper[4718]: I1123 15:03:59.234490 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6qfr\" (UniqueName: \"kubernetes.io/projected/b70088cf-3265-44f6-b723-5c5317dd1f54-kube-api-access-z6qfr\") pod \"nova-cell0-conductor-0\" (UID: \"b70088cf-3265-44f6-b723-5c5317dd1f54\") " pod="openstack/nova-cell0-conductor-0" Nov 23 15:03:59 crc kubenswrapper[4718]: I1123 15:03:59.235073 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b70088cf-3265-44f6-b723-5c5317dd1f54-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b70088cf-3265-44f6-b723-5c5317dd1f54\") " pod="openstack/nova-cell0-conductor-0" Nov 23 15:03:59 crc kubenswrapper[4718]: I1123 15:03:59.235240 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70088cf-3265-44f6-b723-5c5317dd1f54-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b70088cf-3265-44f6-b723-5c5317dd1f54\") " pod="openstack/nova-cell0-conductor-0" Nov 23 15:03:59 crc kubenswrapper[4718]: I1123 15:03:59.336721 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b70088cf-3265-44f6-b723-5c5317dd1f54-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b70088cf-3265-44f6-b723-5c5317dd1f54\") " pod="openstack/nova-cell0-conductor-0" Nov 23 15:03:59 crc kubenswrapper[4718]: I1123 15:03:59.336813 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70088cf-3265-44f6-b723-5c5317dd1f54-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b70088cf-3265-44f6-b723-5c5317dd1f54\") " pod="openstack/nova-cell0-conductor-0" Nov 23 15:03:59 crc kubenswrapper[4718]: I1123 15:03:59.336948 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6qfr\" (UniqueName: \"kubernetes.io/projected/b70088cf-3265-44f6-b723-5c5317dd1f54-kube-api-access-z6qfr\") pod \"nova-cell0-conductor-0\" (UID: \"b70088cf-3265-44f6-b723-5c5317dd1f54\") " pod="openstack/nova-cell0-conductor-0" Nov 23 15:03:59 crc kubenswrapper[4718]: I1123 15:03:59.341401 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70088cf-3265-44f6-b723-5c5317dd1f54-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b70088cf-3265-44f6-b723-5c5317dd1f54\") " pod="openstack/nova-cell0-conductor-0" Nov 23 15:03:59 crc kubenswrapper[4718]: I1123 15:03:59.344017 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b70088cf-3265-44f6-b723-5c5317dd1f54-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"b70088cf-3265-44f6-b723-5c5317dd1f54\") " pod="openstack/nova-cell0-conductor-0" Nov 23 15:03:59 crc kubenswrapper[4718]: I1123 15:03:59.356029 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6qfr\" (UniqueName: \"kubernetes.io/projected/b70088cf-3265-44f6-b723-5c5317dd1f54-kube-api-access-z6qfr\") pod \"nova-cell0-conductor-0\" (UID: \"b70088cf-3265-44f6-b723-5c5317dd1f54\") " pod="openstack/nova-cell0-conductor-0" Nov 23 15:03:59 crc kubenswrapper[4718]: I1123 15:03:59.483034 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 15:03:59 crc kubenswrapper[4718]: I1123 15:03:59.945010 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 15:04:00 crc kubenswrapper[4718]: I1123 15:04:00.076194 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b70088cf-3265-44f6-b723-5c5317dd1f54","Type":"ContainerStarted","Data":"ed95672b4979e702fef84e49befb923b85b61b238773e9714f1f713bc9d9342c"} Nov 23 15:04:01 crc kubenswrapper[4718]: I1123 15:04:01.087901 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b70088cf-3265-44f6-b723-5c5317dd1f54","Type":"ContainerStarted","Data":"27b739fc858f0d13989d88235589121c3fe9e19ef58038c565f35362b9b3d8e5"} Nov 23 15:04:01 crc kubenswrapper[4718]: I1123 15:04:01.088199 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 23 15:04:01 crc kubenswrapper[4718]: I1123 15:04:01.101713 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.101694111 podStartE2EDuration="2.101694111s" podCreationTimestamp="2025-11-23 15:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:04:01.100255242 +0000 UTC m=+1092.339875096" watchObservedRunningTime="2025-11-23 15:04:01.101694111 +0000 UTC m=+1092.341313955" Nov 23 15:04:04 crc kubenswrapper[4718]: I1123 15:04:04.700569 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 15:04:04 crc kubenswrapper[4718]: I1123 15:04:04.836944 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-combined-ca-bundle\") pod \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " Nov 23 15:04:04 crc kubenswrapper[4718]: I1123 15:04:04.837104 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-scripts\") pod \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " Nov 23 15:04:04 crc kubenswrapper[4718]: I1123 15:04:04.837147 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-log-httpd\") pod \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " Nov 23 15:04:04 crc kubenswrapper[4718]: I1123 15:04:04.837168 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-run-httpd\") pod \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " Nov 23 15:04:04 crc kubenswrapper[4718]: I1123 15:04:04.837253 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-sg-core-conf-yaml\") pod \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " Nov 23 15:04:04 crc kubenswrapper[4718]: I1123 15:04:04.837278 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx77q\" (UniqueName: \"kubernetes.io/projected/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-kube-api-access-jx77q\") pod \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " Nov 23 15:04:04 crc kubenswrapper[4718]: I1123 15:04:04.837320 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-config-data\") pod \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\" (UID: \"f35e2ff4-dfa3-4a4a-9625-efcdedac392b\") " Nov 23 15:04:04 crc kubenswrapper[4718]: I1123 15:04:04.838473 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f35e2ff4-dfa3-4a4a-9625-efcdedac392b" (UID: "f35e2ff4-dfa3-4a4a-9625-efcdedac392b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:04:04 crc kubenswrapper[4718]: I1123 15:04:04.838645 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f35e2ff4-dfa3-4a4a-9625-efcdedac392b" (UID: "f35e2ff4-dfa3-4a4a-9625-efcdedac392b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:04:04 crc kubenswrapper[4718]: I1123 15:04:04.846755 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-kube-api-access-jx77q" (OuterVolumeSpecName: "kube-api-access-jx77q") pod "f35e2ff4-dfa3-4a4a-9625-efcdedac392b" (UID: "f35e2ff4-dfa3-4a4a-9625-efcdedac392b"). InnerVolumeSpecName "kube-api-access-jx77q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:04:04 crc kubenswrapper[4718]: I1123 15:04:04.876851 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-scripts" (OuterVolumeSpecName: "scripts") pod "f35e2ff4-dfa3-4a4a-9625-efcdedac392b" (UID: "f35e2ff4-dfa3-4a4a-9625-efcdedac392b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:04:04 crc kubenswrapper[4718]: I1123 15:04:04.929802 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f35e2ff4-dfa3-4a4a-9625-efcdedac392b" (UID: "f35e2ff4-dfa3-4a4a-9625-efcdedac392b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:04:04 crc kubenswrapper[4718]: I1123 15:04:04.939514 4718 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 15:04:04 crc kubenswrapper[4718]: I1123 15:04:04.939553 4718 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 15:04:04 crc kubenswrapper[4718]: I1123 15:04:04.939568 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx77q\" (UniqueName: \"kubernetes.io/projected/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-kube-api-access-jx77q\") on node \"crc\" DevicePath \"\"" Nov 23 15:04:04 crc kubenswrapper[4718]: I1123 15:04:04.939579 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:04:04 crc kubenswrapper[4718]: I1123 15:04:04.939589 4718 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 15:04:04 crc kubenswrapper[4718]: I1123 15:04:04.979598 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f35e2ff4-dfa3-4a4a-9625-efcdedac392b" (UID: "f35e2ff4-dfa3-4a4a-9625-efcdedac392b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:04:04 crc kubenswrapper[4718]: I1123 15:04:04.984849 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-config-data" (OuterVolumeSpecName: "config-data") pod "f35e2ff4-dfa3-4a4a-9625-efcdedac392b" (UID: "f35e2ff4-dfa3-4a4a-9625-efcdedac392b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.041234 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.041549 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35e2ff4-dfa3-4a4a-9625-efcdedac392b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.122969 4718 generic.go:334] "Generic (PLEG): container finished" podID="f35e2ff4-dfa3-4a4a-9625-efcdedac392b" containerID="4d549e7f8aa1ab4213c74fa027af45cec1edffc63b8ebbe5bad0121bb6585ade" exitCode=137 Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.123025 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f35e2ff4-dfa3-4a4a-9625-efcdedac392b","Type":"ContainerDied","Data":"4d549e7f8aa1ab4213c74fa027af45cec1edffc63b8ebbe5bad0121bb6585ade"} Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.123039 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.123062 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f35e2ff4-dfa3-4a4a-9625-efcdedac392b","Type":"ContainerDied","Data":"ec644a7145775648c36d7553f81a8d6a33d6b62ca949fc1be8519a10eb3399ec"} Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.123085 4718 scope.go:117] "RemoveContainer" containerID="4d549e7f8aa1ab4213c74fa027af45cec1edffc63b8ebbe5bad0121bb6585ade" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.147086 4718 scope.go:117] "RemoveContainer" containerID="3187af6325d1d343a03924f33f1c8fdd59bddfd31a0cb71fca7d854d4d69e1d1" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.160531 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.168610 4718 scope.go:117] "RemoveContainer" containerID="7df4328ad81374dd829067a632d7c0937db6320917e78b1404f449789579c818" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.184907 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.201171 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 15:04:05 crc kubenswrapper[4718]: E1123 15:04:05.202017 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f35e2ff4-dfa3-4a4a-9625-efcdedac392b" containerName="ceilometer-notification-agent" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.202093 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f35e2ff4-dfa3-4a4a-9625-efcdedac392b" containerName="ceilometer-notification-agent" Nov 23 15:04:05 crc kubenswrapper[4718]: E1123 15:04:05.202165 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f35e2ff4-dfa3-4a4a-9625-efcdedac392b" containerName="sg-core" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.202230 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f35e2ff4-dfa3-4a4a-9625-efcdedac392b" containerName="sg-core" Nov 23 15:04:05 crc kubenswrapper[4718]: E1123 15:04:05.202289 4718 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f35e2ff4-dfa3-4a4a-9625-efcdedac392b" containerName="proxy-httpd" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.202341 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f35e2ff4-dfa3-4a4a-9625-efcdedac392b" containerName="proxy-httpd" Nov 23 15:04:05 crc kubenswrapper[4718]: E1123 15:04:05.202396 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f35e2ff4-dfa3-4a4a-9625-efcdedac392b" containerName="ceilometer-central-agent" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.202478 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f35e2ff4-dfa3-4a4a-9625-efcdedac392b" containerName="ceilometer-central-agent" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.202750 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f35e2ff4-dfa3-4a4a-9625-efcdedac392b" containerName="ceilometer-notification-agent" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.202837 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f35e2ff4-dfa3-4a4a-9625-efcdedac392b" containerName="proxy-httpd" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.202899 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f35e2ff4-dfa3-4a4a-9625-efcdedac392b" containerName="sg-core" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.202965 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f35e2ff4-dfa3-4a4a-9625-efcdedac392b" containerName="ceilometer-central-agent" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.206485 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.206645 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.208183 4718 scope.go:117] "RemoveContainer" containerID="11b6206986760c888485bfa2c219304a49d167bc0c38f1531943919d0b63cb19" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.215011 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.215189 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.236649 4718 scope.go:117] "RemoveContainer" containerID="4d549e7f8aa1ab4213c74fa027af45cec1edffc63b8ebbe5bad0121bb6585ade" Nov 23 15:04:05 crc kubenswrapper[4718]: E1123 15:04:05.237104 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d549e7f8aa1ab4213c74fa027af45cec1edffc63b8ebbe5bad0121bb6585ade\": container with ID starting with 4d549e7f8aa1ab4213c74fa027af45cec1edffc63b8ebbe5bad0121bb6585ade not found: ID does not exist" containerID="4d549e7f8aa1ab4213c74fa027af45cec1edffc63b8ebbe5bad0121bb6585ade" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.237134 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d549e7f8aa1ab4213c74fa027af45cec1edffc63b8ebbe5bad0121bb6585ade"} err="failed to get container status \"4d549e7f8aa1ab4213c74fa027af45cec1edffc63b8ebbe5bad0121bb6585ade\": rpc error: code = NotFound desc = could not find container \"4d549e7f8aa1ab4213c74fa027af45cec1edffc63b8ebbe5bad0121bb6585ade\": container with ID starting with 4d549e7f8aa1ab4213c74fa027af45cec1edffc63b8ebbe5bad0121bb6585ade not found: ID does not 
exist" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.237157 4718 scope.go:117] "RemoveContainer" containerID="3187af6325d1d343a03924f33f1c8fdd59bddfd31a0cb71fca7d854d4d69e1d1" Nov 23 15:04:05 crc kubenswrapper[4718]: E1123 15:04:05.237365 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3187af6325d1d343a03924f33f1c8fdd59bddfd31a0cb71fca7d854d4d69e1d1\": container with ID starting with 3187af6325d1d343a03924f33f1c8fdd59bddfd31a0cb71fca7d854d4d69e1d1 not found: ID does not exist" containerID="3187af6325d1d343a03924f33f1c8fdd59bddfd31a0cb71fca7d854d4d69e1d1" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.237386 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3187af6325d1d343a03924f33f1c8fdd59bddfd31a0cb71fca7d854d4d69e1d1"} err="failed to get container status \"3187af6325d1d343a03924f33f1c8fdd59bddfd31a0cb71fca7d854d4d69e1d1\": rpc error: code = NotFound desc = could not find container \"3187af6325d1d343a03924f33f1c8fdd59bddfd31a0cb71fca7d854d4d69e1d1\": container with ID starting with 3187af6325d1d343a03924f33f1c8fdd59bddfd31a0cb71fca7d854d4d69e1d1 not found: ID does not exist" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.237398 4718 scope.go:117] "RemoveContainer" containerID="7df4328ad81374dd829067a632d7c0937db6320917e78b1404f449789579c818" Nov 23 15:04:05 crc kubenswrapper[4718]: E1123 15:04:05.237965 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7df4328ad81374dd829067a632d7c0937db6320917e78b1404f449789579c818\": container with ID starting with 7df4328ad81374dd829067a632d7c0937db6320917e78b1404f449789579c818 not found: ID does not exist" containerID="7df4328ad81374dd829067a632d7c0937db6320917e78b1404f449789579c818" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.238006 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7df4328ad81374dd829067a632d7c0937db6320917e78b1404f449789579c818"} err="failed to get container status \"7df4328ad81374dd829067a632d7c0937db6320917e78b1404f449789579c818\": rpc error: code = NotFound desc = could not find container \"7df4328ad81374dd829067a632d7c0937db6320917e78b1404f449789579c818\": container with ID starting with 7df4328ad81374dd829067a632d7c0937db6320917e78b1404f449789579c818 not found: ID does not exist" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.238031 4718 scope.go:117] "RemoveContainer" containerID="11b6206986760c888485bfa2c219304a49d167bc0c38f1531943919d0b63cb19" Nov 23 15:04:05 crc kubenswrapper[4718]: E1123 15:04:05.238251 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11b6206986760c888485bfa2c219304a49d167bc0c38f1531943919d0b63cb19\": container with ID starting with 11b6206986760c888485bfa2c219304a49d167bc0c38f1531943919d0b63cb19 not found: ID does not exist" containerID="11b6206986760c888485bfa2c219304a49d167bc0c38f1531943919d0b63cb19" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.238278 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11b6206986760c888485bfa2c219304a49d167bc0c38f1531943919d0b63cb19"} err="failed to get container status \"11b6206986760c888485bfa2c219304a49d167bc0c38f1531943919d0b63cb19\": rpc error: code = NotFound desc = could not find container 
\"11b6206986760c888485bfa2c219304a49d167bc0c38f1531943919d0b63cb19\": container with ID starting with 11b6206986760c888485bfa2c219304a49d167bc0c38f1531943919d0b63cb19 not found: ID does not exist" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.245656 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e0372d5-8440-4058-964b-0d1b2023c706-run-httpd\") pod \"ceilometer-0\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " pod="openstack/ceilometer-0" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.245692 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e0372d5-8440-4058-964b-0d1b2023c706-log-httpd\") pod \"ceilometer-0\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " pod="openstack/ceilometer-0" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.245735 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e0372d5-8440-4058-964b-0d1b2023c706-scripts\") pod \"ceilometer-0\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " pod="openstack/ceilometer-0" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.245756 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e0372d5-8440-4058-964b-0d1b2023c706-config-data\") pod \"ceilometer-0\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " pod="openstack/ceilometer-0" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.245775 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5khzc\" (UniqueName: \"kubernetes.io/projected/4e0372d5-8440-4058-964b-0d1b2023c706-kube-api-access-5khzc\") pod \"ceilometer-0\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " pod="openstack/ceilometer-0" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.245846 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0372d5-8440-4058-964b-0d1b2023c706-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " pod="openstack/ceilometer-0" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.245893 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e0372d5-8440-4058-964b-0d1b2023c706-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " pod="openstack/ceilometer-0" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.346640 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e0372d5-8440-4058-964b-0d1b2023c706-log-httpd\") pod \"ceilometer-0\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " pod="openstack/ceilometer-0" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.346762 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e0372d5-8440-4058-964b-0d1b2023c706-scripts\") pod \"ceilometer-0\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " pod="openstack/ceilometer-0" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.346799 
4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e0372d5-8440-4058-964b-0d1b2023c706-config-data\") pod \"ceilometer-0\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " pod="openstack/ceilometer-0" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.346834 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5khzc\" (UniqueName: \"kubernetes.io/projected/4e0372d5-8440-4058-964b-0d1b2023c706-kube-api-access-5khzc\") pod \"ceilometer-0\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " pod="openstack/ceilometer-0" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.346977 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0372d5-8440-4058-964b-0d1b2023c706-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " pod="openstack/ceilometer-0" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.347072 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e0372d5-8440-4058-964b-0d1b2023c706-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " pod="openstack/ceilometer-0" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.347121 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e0372d5-8440-4058-964b-0d1b2023c706-run-httpd\") pod \"ceilometer-0\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " pod="openstack/ceilometer-0" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.347858 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e0372d5-8440-4058-964b-0d1b2023c706-run-httpd\") pod \"ceilometer-0\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " pod="openstack/ceilometer-0" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.348919 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e0372d5-8440-4058-964b-0d1b2023c706-log-httpd\") pod \"ceilometer-0\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " pod="openstack/ceilometer-0" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.352359 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e0372d5-8440-4058-964b-0d1b2023c706-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " pod="openstack/ceilometer-0" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.352795 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e0372d5-8440-4058-964b-0d1b2023c706-scripts\") pod \"ceilometer-0\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " pod="openstack/ceilometer-0" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.353726 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0372d5-8440-4058-964b-0d1b2023c706-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " pod="openstack/ceilometer-0" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.362115 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e0372d5-8440-4058-964b-0d1b2023c706-config-data\") pod \"ceilometer-0\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " pod="openstack/ceilometer-0" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.377784 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5khzc\" (UniqueName: \"kubernetes.io/projected/4e0372d5-8440-4058-964b-0d1b2023c706-kube-api-access-5khzc\") pod \"ceilometer-0\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " pod="openstack/ceilometer-0" Nov 23 15:04:05 crc kubenswrapper[4718]: I1123 15:04:05.524573 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 15:04:06 crc kubenswrapper[4718]: W1123 15:04:06.020601 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e0372d5_8440_4058_964b_0d1b2023c706.slice/crio-5aaa5f225e8239bfa10e02e539543e2d8f2eea4a84d67975f60b63c692d212a2 WatchSource:0}: Error finding container 5aaa5f225e8239bfa10e02e539543e2d8f2eea4a84d67975f60b63c692d212a2: Status 404 returned error can't find the container with id 5aaa5f225e8239bfa10e02e539543e2d8f2eea4a84d67975f60b63c692d212a2 Nov 23 15:04:06 crc kubenswrapper[4718]: I1123 15:04:06.023550 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 15:04:06 crc kubenswrapper[4718]: I1123 15:04:06.133825 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e0372d5-8440-4058-964b-0d1b2023c706","Type":"ContainerStarted","Data":"5aaa5f225e8239bfa10e02e539543e2d8f2eea4a84d67975f60b63c692d212a2"} Nov 23 15:04:06 crc kubenswrapper[4718]: I1123 15:04:06.460878 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f35e2ff4-dfa3-4a4a-9625-efcdedac392b" path="/var/lib/kubelet/pods/f35e2ff4-dfa3-4a4a-9625-efcdedac392b/volumes" Nov 23 15:04:07 crc kubenswrapper[4718]: I1123 15:04:07.146540 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e0372d5-8440-4058-964b-0d1b2023c706","Type":"ContainerStarted","Data":"3032699c600a460605924488bb745a1d27829dc8eafa420ba64f144c5f126992"} Nov 23 15:04:08 crc kubenswrapper[4718]: I1123 15:04:08.174803 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e0372d5-8440-4058-964b-0d1b2023c706","Type":"ContainerStarted","Data":"e37e5d2d63c870c665ea363696f332caa647d8f652be8b48e2bfbdf2c64769ee"} Nov 23 15:04:09 crc kubenswrapper[4718]: I1123 15:04:09.185500 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e0372d5-8440-4058-964b-0d1b2023c706","Type":"ContainerStarted","Data":"733c49db40cbc43ad1d3b29e241b8bc03e33d3e9fa2d3c629e33e3a140393240"} Nov 23 15:04:09 crc kubenswrapper[4718]: I1123 15:04:09.531181 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.135499 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-qtcsw"] Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.136991 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qtcsw" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.140346 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.141814 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.149960 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qtcsw"] Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.241569 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5d8481e-f2de-46c5-8b56-7c85e054378d-scripts\") pod \"nova-cell0-cell-mapping-qtcsw\" (UID: \"c5d8481e-f2de-46c5-8b56-7c85e054378d\") " pod="openstack/nova-cell0-cell-mapping-qtcsw" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.241652 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d8481e-f2de-46c5-8b56-7c85e054378d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qtcsw\" (UID: \"c5d8481e-f2de-46c5-8b56-7c85e054378d\") " pod="openstack/nova-cell0-cell-mapping-qtcsw" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.241738 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q76v\" (UniqueName: \"kubernetes.io/projected/c5d8481e-f2de-46c5-8b56-7c85e054378d-kube-api-access-2q76v\") pod \"nova-cell0-cell-mapping-qtcsw\" (UID: \"c5d8481e-f2de-46c5-8b56-7c85e054378d\") " pod="openstack/nova-cell0-cell-mapping-qtcsw" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.241769 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5d8481e-f2de-46c5-8b56-7c85e054378d-config-data\") pod \"nova-cell0-cell-mapping-qtcsw\" (UID: \"c5d8481e-f2de-46c5-8b56-7c85e054378d\") " pod="openstack/nova-cell0-cell-mapping-qtcsw" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.343089 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d8481e-f2de-46c5-8b56-7c85e054378d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qtcsw\" (UID: \"c5d8481e-f2de-46c5-8b56-7c85e054378d\") " pod="openstack/nova-cell0-cell-mapping-qtcsw" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.343366 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q76v\" (UniqueName: \"kubernetes.io/projected/c5d8481e-f2de-46c5-8b56-7c85e054378d-kube-api-access-2q76v\") pod \"nova-cell0-cell-mapping-qtcsw\" (UID: \"c5d8481e-f2de-46c5-8b56-7c85e054378d\") " pod="openstack/nova-cell0-cell-mapping-qtcsw" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.343388 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5d8481e-f2de-46c5-8b56-7c85e054378d-config-data\") pod \"nova-cell0-cell-mapping-qtcsw\" (UID: \"c5d8481e-f2de-46c5-8b56-7c85e054378d\") " pod="openstack/nova-cell0-cell-mapping-qtcsw" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.343497 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/c5d8481e-f2de-46c5-8b56-7c85e054378d-scripts\") pod \"nova-cell0-cell-mapping-qtcsw\" (UID: \"c5d8481e-f2de-46c5-8b56-7c85e054378d\") " pod="openstack/nova-cell0-cell-mapping-qtcsw" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.356601 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.358244 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.368915 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.372310 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.373505 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5d8481e-f2de-46c5-8b56-7c85e054378d-scripts\") pod \"nova-cell0-cell-mapping-qtcsw\" (UID: \"c5d8481e-f2de-46c5-8b56-7c85e054378d\") " pod="openstack/nova-cell0-cell-mapping-qtcsw" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.378493 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q76v\" (UniqueName: \"kubernetes.io/projected/c5d8481e-f2de-46c5-8b56-7c85e054378d-kube-api-access-2q76v\") pod \"nova-cell0-cell-mapping-qtcsw\" (UID: \"c5d8481e-f2de-46c5-8b56-7c85e054378d\") " pod="openstack/nova-cell0-cell-mapping-qtcsw" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.379947 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5d8481e-f2de-46c5-8b56-7c85e054378d-config-data\") pod \"nova-cell0-cell-mapping-qtcsw\" (UID: \"c5d8481e-f2de-46c5-8b56-7c85e054378d\") " pod="openstack/nova-cell0-cell-mapping-qtcsw" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.386175 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d8481e-f2de-46c5-8b56-7c85e054378d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qtcsw\" (UID: \"c5d8481e-f2de-46c5-8b56-7c85e054378d\") " pod="openstack/nova-cell0-cell-mapping-qtcsw" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.445548 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b26fa96-edcf-446b-8f92-4d7144c6186e-logs\") pod \"nova-api-0\" (UID: \"1b26fa96-edcf-446b-8f92-4d7144c6186e\") " pod="openstack/nova-api-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.445604 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b26fa96-edcf-446b-8f92-4d7144c6186e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1b26fa96-edcf-446b-8f92-4d7144c6186e\") " pod="openstack/nova-api-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.445662 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dp5h\" (UniqueName: \"kubernetes.io/projected/1b26fa96-edcf-446b-8f92-4d7144c6186e-kube-api-access-7dp5h\") pod \"nova-api-0\" (UID: \"1b26fa96-edcf-446b-8f92-4d7144c6186e\") " pod="openstack/nova-api-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.445721 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b26fa96-edcf-446b-8f92-4d7144c6186e-config-data\") pod \"nova-api-0\" (UID: \"1b26fa96-edcf-446b-8f92-4d7144c6186e\") " pod="openstack/nova-api-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.464075 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qtcsw" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.546832 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b26fa96-edcf-446b-8f92-4d7144c6186e-config-data\") pod \"nova-api-0\" (UID: \"1b26fa96-edcf-446b-8f92-4d7144c6186e\") " pod="openstack/nova-api-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.547019 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b26fa96-edcf-446b-8f92-4d7144c6186e-logs\") pod \"nova-api-0\" (UID: \"1b26fa96-edcf-446b-8f92-4d7144c6186e\") " pod="openstack/nova-api-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.547046 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b26fa96-edcf-446b-8f92-4d7144c6186e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1b26fa96-edcf-446b-8f92-4d7144c6186e\") " pod="openstack/nova-api-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.547093 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dp5h\" (UniqueName: \"kubernetes.io/projected/1b26fa96-edcf-446b-8f92-4d7144c6186e-kube-api-access-7dp5h\") pod \"nova-api-0\" (UID: \"1b26fa96-edcf-446b-8f92-4d7144c6186e\") " pod="openstack/nova-api-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.548099 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b26fa96-edcf-446b-8f92-4d7144c6186e-logs\") pod \"nova-api-0\" (UID: \"1b26fa96-edcf-446b-8f92-4d7144c6186e\") " pod="openstack/nova-api-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.551680 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b26fa96-edcf-446b-8f92-4d7144c6186e-config-data\") pod \"nova-api-0\" (UID: \"1b26fa96-edcf-446b-8f92-4d7144c6186e\") " pod="openstack/nova-api-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.555153 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b26fa96-edcf-446b-8f92-4d7144c6186e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1b26fa96-edcf-446b-8f92-4d7144c6186e\") " pod="openstack/nova-api-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.567544 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dp5h\" (UniqueName: \"kubernetes.io/projected/1b26fa96-edcf-446b-8f92-4d7144c6186e-kube-api-access-7dp5h\") pod \"nova-api-0\" (UID: \"1b26fa96-edcf-446b-8f92-4d7144c6186e\") " pod="openstack/nova-api-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.592756 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.593879 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.595881 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.601363 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.649110 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfvq8\" (UniqueName: \"kubernetes.io/projected/b40ce109-6c6a-4a75-8aa9-c749fa12cee3-kube-api-access-vfvq8\") pod \"nova-cell1-novncproxy-0\" (UID: \"b40ce109-6c6a-4a75-8aa9-c749fa12cee3\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.649270 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b40ce109-6c6a-4a75-8aa9-c749fa12cee3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b40ce109-6c6a-4a75-8aa9-c749fa12cee3\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.649327 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b40ce109-6c6a-4a75-8aa9-c749fa12cee3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b40ce109-6c6a-4a75-8aa9-c749fa12cee3\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.695919 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.697255 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.701688 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.735777 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.753816 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44ab2204-132f-4820-a33e-6707a02629fa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"44ab2204-132f-4820-a33e-6707a02629fa\") " pod="openstack/nova-scheduler-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.753899 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44ab2204-132f-4820-a33e-6707a02629fa-config-data\") pod \"nova-scheduler-0\" (UID: \"44ab2204-132f-4820-a33e-6707a02629fa\") " pod="openstack/nova-scheduler-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.753946 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbwdp\" (UniqueName: \"kubernetes.io/projected/44ab2204-132f-4820-a33e-6707a02629fa-kube-api-access-pbwdp\") pod \"nova-scheduler-0\" (UID: \"44ab2204-132f-4820-a33e-6707a02629fa\") " pod="openstack/nova-scheduler-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.754000 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b40ce109-6c6a-4a75-8aa9-c749fa12cee3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b40ce109-6c6a-4a75-8aa9-c749fa12cee3\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.754047 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b40ce109-6c6a-4a75-8aa9-c749fa12cee3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b40ce109-6c6a-4a75-8aa9-c749fa12cee3\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.754125 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfvq8\" (UniqueName: \"kubernetes.io/projected/b40ce109-6c6a-4a75-8aa9-c749fa12cee3-kube-api-access-vfvq8\") pod \"nova-cell1-novncproxy-0\" (UID: \"b40ce109-6c6a-4a75-8aa9-c749fa12cee3\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.760741 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b40ce109-6c6a-4a75-8aa9-c749fa12cee3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b40ce109-6c6a-4a75-8aa9-c749fa12cee3\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.761155 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.769795 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b40ce109-6c6a-4a75-8aa9-c749fa12cee3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b40ce109-6c6a-4a75-8aa9-c749fa12cee3\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.787828 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfvq8\" (UniqueName: \"kubernetes.io/projected/b40ce109-6c6a-4a75-8aa9-c749fa12cee3-kube-api-access-vfvq8\") pod \"nova-cell1-novncproxy-0\" (UID: \"b40ce109-6c6a-4a75-8aa9-c749fa12cee3\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.806652 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.830293 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.846472 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.861000 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/691bbae8-8ade-4e8b-86cf-3498eb24c347-logs\") pod \"nova-metadata-0\" (UID: \"691bbae8-8ade-4e8b-86cf-3498eb24c347\") " pod="openstack/nova-metadata-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.861079 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44ab2204-132f-4820-a33e-6707a02629fa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"44ab2204-132f-4820-a33e-6707a02629fa\") " pod="openstack/nova-scheduler-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.861112 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6827\" (UniqueName: \"kubernetes.io/projected/691bbae8-8ade-4e8b-86cf-3498eb24c347-kube-api-access-p6827\") pod \"nova-metadata-0\" (UID: \"691bbae8-8ade-4e8b-86cf-3498eb24c347\") " pod="openstack/nova-metadata-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.861141 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/691bbae8-8ade-4e8b-86cf-3498eb24c347-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"691bbae8-8ade-4e8b-86cf-3498eb24c347\") " pod="openstack/nova-metadata-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.861178 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44ab2204-132f-4820-a33e-6707a02629fa-config-data\") pod \"nova-scheduler-0\" (UID: \"44ab2204-132f-4820-a33e-6707a02629fa\") " pod="openstack/nova-scheduler-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.861204 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbwdp\" (UniqueName: \"kubernetes.io/projected/44ab2204-132f-4820-a33e-6707a02629fa-kube-api-access-pbwdp\") pod \"nova-scheduler-0\" (UID: \"44ab2204-132f-4820-a33e-6707a02629fa\") " pod="openstack/nova-scheduler-0" Nov 23 15:04:10 
crc kubenswrapper[4718]: I1123 15:04:10.861232 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/691bbae8-8ade-4e8b-86cf-3498eb24c347-config-data\") pod \"nova-metadata-0\" (UID: \"691bbae8-8ade-4e8b-86cf-3498eb24c347\") " pod="openstack/nova-metadata-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.884875 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44ab2204-132f-4820-a33e-6707a02629fa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"44ab2204-132f-4820-a33e-6707a02629fa\") " pod="openstack/nova-scheduler-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.892952 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbwdp\" (UniqueName: \"kubernetes.io/projected/44ab2204-132f-4820-a33e-6707a02629fa-kube-api-access-pbwdp\") pod \"nova-scheduler-0\" (UID: \"44ab2204-132f-4820-a33e-6707a02629fa\") " pod="openstack/nova-scheduler-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.896760 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.908885 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44ab2204-132f-4820-a33e-6707a02629fa-config-data\") pod \"nova-scheduler-0\" (UID: \"44ab2204-132f-4820-a33e-6707a02629fa\") " pod="openstack/nova-scheduler-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.940788 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-5pqkd"] Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.942424 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.947114 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-5pqkd"] Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.965185 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.967070 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/691bbae8-8ade-4e8b-86cf-3498eb24c347-logs\") pod \"nova-metadata-0\" (UID: \"691bbae8-8ade-4e8b-86cf-3498eb24c347\") " pod="openstack/nova-metadata-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.967136 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6827\" (UniqueName: \"kubernetes.io/projected/691bbae8-8ade-4e8b-86cf-3498eb24c347-kube-api-access-p6827\") pod \"nova-metadata-0\" (UID: \"691bbae8-8ade-4e8b-86cf-3498eb24c347\") " pod="openstack/nova-metadata-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.967160 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/691bbae8-8ade-4e8b-86cf-3498eb24c347-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"691bbae8-8ade-4e8b-86cf-3498eb24c347\") " pod="openstack/nova-metadata-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.967199 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/691bbae8-8ade-4e8b-86cf-3498eb24c347-config-data\") pod \"nova-metadata-0\" (UID: \"691bbae8-8ade-4e8b-86cf-3498eb24c347\") " pod="openstack/nova-metadata-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.968132 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/691bbae8-8ade-4e8b-86cf-3498eb24c347-logs\") pod \"nova-metadata-0\" (UID: \"691bbae8-8ade-4e8b-86cf-3498eb24c347\") " pod="openstack/nova-metadata-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.972006 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/691bbae8-8ade-4e8b-86cf-3498eb24c347-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"691bbae8-8ade-4e8b-86cf-3498eb24c347\") " pod="openstack/nova-metadata-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.972633 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/691bbae8-8ade-4e8b-86cf-3498eb24c347-config-data\") pod \"nova-metadata-0\" (UID: \"691bbae8-8ade-4e8b-86cf-3498eb24c347\") " pod="openstack/nova-metadata-0" Nov 23 15:04:10 crc kubenswrapper[4718]: I1123 15:04:10.985665 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6827\" (UniqueName: \"kubernetes.io/projected/691bbae8-8ade-4e8b-86cf-3498eb24c347-kube-api-access-p6827\") pod \"nova-metadata-0\" (UID: \"691bbae8-8ade-4e8b-86cf-3498eb24c347\") " pod="openstack/nova-metadata-0" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.027323 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.073371 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-dns-svc\") pod \"dnsmasq-dns-757b4f8459-5pqkd\" (UID: \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\") " pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.073547 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-5pqkd\" (UID: \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\") " pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.073570 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-config\") pod \"dnsmasq-dns-757b4f8459-5pqkd\" (UID: \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\") " pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.073626 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-5pqkd\" (UID: \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\") " pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.073667 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wlqm\" (UniqueName: \"kubernetes.io/projected/50b65a80-4058-4be6-b1c6-79e1fd2e081f-kube-api-access-2wlqm\") pod \"dnsmasq-dns-757b4f8459-5pqkd\" (UID: \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\") " pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.073729 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-5pqkd\" (UID: \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\") " pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.153925 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.176954 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-5pqkd\" (UID: \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\") " pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.176994 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-config\") pod \"dnsmasq-dns-757b4f8459-5pqkd\" (UID: \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\") " pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.177049 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-5pqkd\" (UID: \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\") " pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.177090 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wlqm\" (UniqueName: \"kubernetes.io/projected/50b65a80-4058-4be6-b1c6-79e1fd2e081f-kube-api-access-2wlqm\") pod \"dnsmasq-dns-757b4f8459-5pqkd\" (UID: \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\") " pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.177125 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-5pqkd\" (UID: \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\") " pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.177160 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-dns-svc\") pod \"dnsmasq-dns-757b4f8459-5pqkd\" (UID: \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\") " pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.178859 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-5pqkd\" (UID: \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\") " pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.179389 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-5pqkd\" (UID: \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\") " pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.179398 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-dns-svc\") pod \"dnsmasq-dns-757b4f8459-5pqkd\" (UID: \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\") " pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" Nov 23 15:04:11 crc 
kubenswrapper[4718]: I1123 15:04:11.179934 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-5pqkd\" (UID: \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\") " pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.180473 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-config\") pod \"dnsmasq-dns-757b4f8459-5pqkd\" (UID: \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\") " pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.195328 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wlqm\" (UniqueName: \"kubernetes.io/projected/50b65a80-4058-4be6-b1c6-79e1fd2e081f-kube-api-access-2wlqm\") pod \"dnsmasq-dns-757b4f8459-5pqkd\" (UID: \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\") " pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.257908 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.515254 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k2kvx"] Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.516800 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-k2kvx" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.519027 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.519296 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.527145 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k2kvx"] Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.587701 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/371cc685-4543-4483-83bf-bb04f1d750b5-scripts\") pod \"nova-cell1-conductor-db-sync-k2kvx\" (UID: \"371cc685-4543-4483-83bf-bb04f1d750b5\") " pod="openstack/nova-cell1-conductor-db-sync-k2kvx" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.587760 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq2r9\" (UniqueName: \"kubernetes.io/projected/371cc685-4543-4483-83bf-bb04f1d750b5-kube-api-access-mq2r9\") pod \"nova-cell1-conductor-db-sync-k2kvx\" (UID: \"371cc685-4543-4483-83bf-bb04f1d750b5\") " pod="openstack/nova-cell1-conductor-db-sync-k2kvx" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.587790 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/371cc685-4543-4483-83bf-bb04f1d750b5-config-data\") pod \"nova-cell1-conductor-db-sync-k2kvx\" (UID: \"371cc685-4543-4483-83bf-bb04f1d750b5\") " pod="openstack/nova-cell1-conductor-db-sync-k2kvx" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.587822 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/371cc685-4543-4483-83bf-bb04f1d750b5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-k2kvx\" (UID: \"371cc685-4543-4483-83bf-bb04f1d750b5\") " pod="openstack/nova-cell1-conductor-db-sync-k2kvx" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.598652 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qtcsw"] Nov 23 15:04:11 crc kubenswrapper[4718]: W1123 15:04:11.627715 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5d8481e_f2de_46c5_8b56_7c85e054378d.slice/crio-d0b129e7cfa7dd9c2443f36ca03ad4ada61c1b2d6f352d85d64c194e7b3f85d4 WatchSource:0}: Error finding container d0b129e7cfa7dd9c2443f36ca03ad4ada61c1b2d6f352d85d64c194e7b3f85d4: Status 404 returned error can't find the container with id d0b129e7cfa7dd9c2443f36ca03ad4ada61c1b2d6f352d85d64c194e7b3f85d4 Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.689425 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/371cc685-4543-4483-83bf-bb04f1d750b5-scripts\") pod \"nova-cell1-conductor-db-sync-k2kvx\" (UID: \"371cc685-4543-4483-83bf-bb04f1d750b5\") " pod="openstack/nova-cell1-conductor-db-sync-k2kvx" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.689496 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq2r9\" (UniqueName: \"kubernetes.io/projected/371cc685-4543-4483-83bf-bb04f1d750b5-kube-api-access-mq2r9\") pod \"nova-cell1-conductor-db-sync-k2kvx\" (UID: \"371cc685-4543-4483-83bf-bb04f1d750b5\") " pod="openstack/nova-cell1-conductor-db-sync-k2kvx" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.689528 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/371cc685-4543-4483-83bf-bb04f1d750b5-config-data\") pod \"nova-cell1-conductor-db-sync-k2kvx\" (UID: \"371cc685-4543-4483-83bf-bb04f1d750b5\") " pod="openstack/nova-cell1-conductor-db-sync-k2kvx" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.689555 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/371cc685-4543-4483-83bf-bb04f1d750b5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-k2kvx\" (UID: \"371cc685-4543-4483-83bf-bb04f1d750b5\") " pod="openstack/nova-cell1-conductor-db-sync-k2kvx" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.695578 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/371cc685-4543-4483-83bf-bb04f1d750b5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-k2kvx\" (UID: \"371cc685-4543-4483-83bf-bb04f1d750b5\") " pod="openstack/nova-cell1-conductor-db-sync-k2kvx" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.697982 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/371cc685-4543-4483-83bf-bb04f1d750b5-scripts\") pod \"nova-cell1-conductor-db-sync-k2kvx\" (UID: \"371cc685-4543-4483-83bf-bb04f1d750b5\") " pod="openstack/nova-cell1-conductor-db-sync-k2kvx" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.699319 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/371cc685-4543-4483-83bf-bb04f1d750b5-config-data\") pod \"nova-cell1-conductor-db-sync-k2kvx\" (UID: \"371cc685-4543-4483-83bf-bb04f1d750b5\") " pod="openstack/nova-cell1-conductor-db-sync-k2kvx" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.716535 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq2r9\" (UniqueName: \"kubernetes.io/projected/371cc685-4543-4483-83bf-bb04f1d750b5-kube-api-access-mq2r9\") pod \"nova-cell1-conductor-db-sync-k2kvx\" (UID: \"371cc685-4543-4483-83bf-bb04f1d750b5\") " pod="openstack/nova-cell1-conductor-db-sync-k2kvx" Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.791857 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.805309 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.847372 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 15:04:11 crc kubenswrapper[4718]: I1123 15:04:11.923561 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-k2kvx" Nov 23 15:04:12 crc kubenswrapper[4718]: I1123 15:04:12.076407 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-5pqkd"] Nov 23 15:04:12 crc kubenswrapper[4718]: I1123 15:04:12.087883 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 15:04:12 crc kubenswrapper[4718]: W1123 15:04:12.112388 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod691bbae8_8ade_4e8b_86cf_3498eb24c347.slice/crio-d3099dc040223d66a2666ab6e3496d3f003b6784a5386a0c31f04af6fc18345e WatchSource:0}: Error finding container d3099dc040223d66a2666ab6e3496d3f003b6784a5386a0c31f04af6fc18345e: Status 404 returned error can't find the container with id d3099dc040223d66a2666ab6e3496d3f003b6784a5386a0c31f04af6fc18345e Nov 23 15:04:12 crc kubenswrapper[4718]: I1123 15:04:12.232042 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"44ab2204-132f-4820-a33e-6707a02629fa","Type":"ContainerStarted","Data":"4272e6dcda40d2eeb2b1fbef0738b3d6e6d9436e6318b7d8c3fc91477236eca6"} Nov 23 15:04:12 crc kubenswrapper[4718]: I1123 15:04:12.233276 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b40ce109-6c6a-4a75-8aa9-c749fa12cee3","Type":"ContainerStarted","Data":"98a729f12bf82400e26e82e12639de82c210a6f2d9be0fbfd8542ce3aaae265a"} Nov 23 15:04:12 crc kubenswrapper[4718]: I1123 15:04:12.234749 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b26fa96-edcf-446b-8f92-4d7144c6186e","Type":"ContainerStarted","Data":"5b9548041b3ce5d63cd622a4f4c1603fc7b84688b8b93c4c700b338c94e65ac6"} Nov 23 15:04:12 crc kubenswrapper[4718]: I1123 15:04:12.238288 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e0372d5-8440-4058-964b-0d1b2023c706","Type":"ContainerStarted","Data":"ee3968074e36e46c2c03413cc84523326d8a99b7eafdc1ae4db72dc47cb59155"} Nov 23 15:04:12 crc kubenswrapper[4718]: I1123 15:04:12.238425 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 15:04:12 crc 
kubenswrapper[4718]: I1123 15:04:12.243185 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qtcsw" event={"ID":"c5d8481e-f2de-46c5-8b56-7c85e054378d","Type":"ContainerStarted","Data":"20bdfde85037bd0b202b821ceb6b80b89569184ba715667609480e9b298198ae"} Nov 23 15:04:12 crc kubenswrapper[4718]: I1123 15:04:12.243223 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qtcsw" event={"ID":"c5d8481e-f2de-46c5-8b56-7c85e054378d","Type":"ContainerStarted","Data":"d0b129e7cfa7dd9c2443f36ca03ad4ada61c1b2d6f352d85d64c194e7b3f85d4"} Nov 23 15:04:12 crc kubenswrapper[4718]: I1123 15:04:12.246391 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" event={"ID":"50b65a80-4058-4be6-b1c6-79e1fd2e081f","Type":"ContainerStarted","Data":"5e540d50af039e291dab2c4543f9e11c3a4f3c1a9bc1e8268c115d9a3973511b"} Nov 23 15:04:12 crc kubenswrapper[4718]: I1123 15:04:12.247519 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"691bbae8-8ade-4e8b-86cf-3498eb24c347","Type":"ContainerStarted","Data":"d3099dc040223d66a2666ab6e3496d3f003b6784a5386a0c31f04af6fc18345e"} Nov 23 15:04:12 crc kubenswrapper[4718]: I1123 15:04:12.256560 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.225475062 podStartE2EDuration="7.256544983s" podCreationTimestamp="2025-11-23 15:04:05 +0000 UTC" firstStartedPulling="2025-11-23 15:04:06.023906997 +0000 UTC m=+1097.263526841" lastFinishedPulling="2025-11-23 15:04:11.054976918 +0000 UTC m=+1102.294596762" observedRunningTime="2025-11-23 15:04:12.254726534 +0000 UTC m=+1103.494346378" watchObservedRunningTime="2025-11-23 15:04:12.256544983 +0000 UTC m=+1103.496164827" Nov 23 15:04:12 crc kubenswrapper[4718]: I1123 15:04:12.288430 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-qtcsw" podStartSLOduration=2.288413199 podStartE2EDuration="2.288413199s" podCreationTimestamp="2025-11-23 15:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:04:12.277911934 +0000 UTC m=+1103.517531778" watchObservedRunningTime="2025-11-23 15:04:12.288413199 +0000 UTC m=+1103.528033043" Nov 23 15:04:12 crc kubenswrapper[4718]: I1123 15:04:12.413608 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k2kvx"] Nov 23 15:04:13 crc kubenswrapper[4718]: I1123 15:04:13.273824 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-k2kvx" event={"ID":"371cc685-4543-4483-83bf-bb04f1d750b5","Type":"ContainerStarted","Data":"786c7aed7a2bc3d301bedc9a8377f85b18e5372e6c0fc3d62fb9c39b98483139"} Nov 23 15:04:13 crc kubenswrapper[4718]: I1123 15:04:13.274336 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-k2kvx" event={"ID":"371cc685-4543-4483-83bf-bb04f1d750b5","Type":"ContainerStarted","Data":"89bff08d6546eace816218055a699ef0b0ba04a38da17e83fadebf26b31f6493"} Nov 23 15:04:13 crc kubenswrapper[4718]: I1123 15:04:13.277374 4718 generic.go:334] "Generic (PLEG): container finished" podID="50b65a80-4058-4be6-b1c6-79e1fd2e081f" containerID="5a70f425fc62a46762ace64c97718e5ab88fd0ea0593fbafd8fdc5a85f20f868" exitCode=0 Nov 23 15:04:13 crc kubenswrapper[4718]: I1123 15:04:13.278200 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" event={"ID":"50b65a80-4058-4be6-b1c6-79e1fd2e081f","Type":"ContainerDied","Data":"5a70f425fc62a46762ace64c97718e5ab88fd0ea0593fbafd8fdc5a85f20f868"} Nov 23 15:04:13 crc kubenswrapper[4718]: I1123 15:04:13.299949 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-k2kvx" podStartSLOduration=2.299934902 podStartE2EDuration="2.299934902s" podCreationTimestamp="2025-11-23 15:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:04:13.298966496 +0000 UTC m=+1104.538586340" watchObservedRunningTime="2025-11-23 15:04:13.299934902 +0000 UTC m=+1104.539554746" Nov 23 15:04:14 crc kubenswrapper[4718]: I1123 15:04:14.134520 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 15:04:14 crc kubenswrapper[4718]: I1123 15:04:14.144839 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 15:04:17 crc kubenswrapper[4718]: I1123 15:04:17.320760 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" event={"ID":"50b65a80-4058-4be6-b1c6-79e1fd2e081f","Type":"ContainerStarted","Data":"5d782cd7a874accb6bb0e23605b84bd713158f472e496b4164f410e33e96ecae"} Nov 23 15:04:17 crc kubenswrapper[4718]: I1123 15:04:17.321354 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" Nov 23 15:04:17 crc kubenswrapper[4718]: I1123 15:04:17.322614 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"691bbae8-8ade-4e8b-86cf-3498eb24c347","Type":"ContainerStarted","Data":"8ee55256bf3ad1c1332b7850c611cb6a954aae434f6d24efeedd4819db6b062f"} Nov 23 15:04:17 crc kubenswrapper[4718]: I1123 15:04:17.322662 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"691bbae8-8ade-4e8b-86cf-3498eb24c347","Type":"ContainerStarted","Data":"00e2bfa3bf3d2e533a0040240fd3d3157d83babd8ec8e45a6e2b408e30c14ea5"} Nov 23 15:04:17 crc kubenswrapper[4718]: I1123 15:04:17.322658 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="691bbae8-8ade-4e8b-86cf-3498eb24c347" containerName="nova-metadata-log" containerID="cri-o://00e2bfa3bf3d2e533a0040240fd3d3157d83babd8ec8e45a6e2b408e30c14ea5" gracePeriod=30 Nov 23 15:04:17 crc kubenswrapper[4718]: I1123 15:04:17.322690 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="691bbae8-8ade-4e8b-86cf-3498eb24c347" containerName="nova-metadata-metadata" containerID="cri-o://8ee55256bf3ad1c1332b7850c611cb6a954aae434f6d24efeedd4819db6b062f" gracePeriod=30 Nov 23 15:04:17 crc kubenswrapper[4718]: I1123 15:04:17.325425 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"44ab2204-132f-4820-a33e-6707a02629fa","Type":"ContainerStarted","Data":"05ffe1a73a429e91bf6de071e8cd2d59b2b1b734d6dbf858889f7f717f5eff8d"} Nov 23 15:04:17 crc kubenswrapper[4718]: I1123 15:04:17.329927 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b40ce109-6c6a-4a75-8aa9-c749fa12cee3","Type":"ContainerStarted","Data":"209b303c0a804d3c8a71b4ea405c7251bbb2f569886032a7ed2f8022aad2b043"} Nov 23 
15:04:17 crc kubenswrapper[4718]: I1123 15:04:17.330154 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b40ce109-6c6a-4a75-8aa9-c749fa12cee3" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://209b303c0a804d3c8a71b4ea405c7251bbb2f569886032a7ed2f8022aad2b043" gracePeriod=30 Nov 23 15:04:17 crc kubenswrapper[4718]: I1123 15:04:17.334610 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b26fa96-edcf-446b-8f92-4d7144c6186e","Type":"ContainerStarted","Data":"51586b95c3c9f835f15678f44426c95f13fb870c70271daf51e0bc9e69cab431"} Nov 23 15:04:17 crc kubenswrapper[4718]: I1123 15:04:17.334647 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b26fa96-edcf-446b-8f92-4d7144c6186e","Type":"ContainerStarted","Data":"a2474818c53b7db9d82b08ed6cb80c2a867951212727a1710153ce3461c5c9bb"} Nov 23 15:04:17 crc kubenswrapper[4718]: I1123 15:04:17.353757 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" podStartSLOduration=7.353738235 podStartE2EDuration="7.353738235s" podCreationTimestamp="2025-11-23 15:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:04:17.350769214 +0000 UTC m=+1108.590389058" watchObservedRunningTime="2025-11-23 15:04:17.353738235 +0000 UTC m=+1108.593358079" Nov 23 15:04:17 crc kubenswrapper[4718]: I1123 15:04:17.373019 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.12421505 podStartE2EDuration="7.373001527s" podCreationTimestamp="2025-11-23 15:04:10 +0000 UTC" firstStartedPulling="2025-11-23 15:04:11.816352638 +0000 UTC m=+1103.055972482" lastFinishedPulling="2025-11-23 15:04:16.065139105 +0000 UTC m=+1107.304758959" observedRunningTime="2025-11-23 15:04:17.365253027 +0000 UTC m=+1108.604872881" watchObservedRunningTime="2025-11-23 15:04:17.373001527 +0000 UTC m=+1108.612621371" Nov 23 15:04:17 crc kubenswrapper[4718]: I1123 15:04:17.384071 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.413016623 podStartE2EDuration="7.384049937s" podCreationTimestamp="2025-11-23 15:04:10 +0000 UTC" firstStartedPulling="2025-11-23 15:04:12.116582732 +0000 UTC m=+1103.356202576" lastFinishedPulling="2025-11-23 15:04:16.087616046 +0000 UTC m=+1107.327235890" observedRunningTime="2025-11-23 15:04:17.381268752 +0000 UTC m=+1108.620888606" watchObservedRunningTime="2025-11-23 15:04:17.384049937 +0000 UTC m=+1108.623669781" Nov 23 15:04:17 crc kubenswrapper[4718]: I1123 15:04:17.405590 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.189585404 podStartE2EDuration="7.405567662s" podCreationTimestamp="2025-11-23 15:04:10 +0000 UTC" firstStartedPulling="2025-11-23 15:04:11.871867465 +0000 UTC m=+1103.111487309" lastFinishedPulling="2025-11-23 15:04:16.087849723 +0000 UTC m=+1107.327469567" observedRunningTime="2025-11-23 15:04:17.402557561 +0000 UTC m=+1108.642177405" watchObservedRunningTime="2025-11-23 15:04:17.405567662 +0000 UTC m=+1108.645187506" Nov 23 15:04:17 crc kubenswrapper[4718]: I1123 15:04:17.421633 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" 
podStartSLOduration=3.117880937 podStartE2EDuration="7.421617998s" podCreationTimestamp="2025-11-23 15:04:10 +0000 UTC" firstStartedPulling="2025-11-23 15:04:11.832053664 +0000 UTC m=+1103.071673508" lastFinishedPulling="2025-11-23 15:04:16.135790725 +0000 UTC m=+1107.375410569" observedRunningTime="2025-11-23 15:04:17.421189317 +0000 UTC m=+1108.660809161" watchObservedRunningTime="2025-11-23 15:04:17.421617998 +0000 UTC m=+1108.661237832" Nov 23 15:04:17 crc kubenswrapper[4718]: I1123 15:04:17.905144 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.036951 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6827\" (UniqueName: \"kubernetes.io/projected/691bbae8-8ade-4e8b-86cf-3498eb24c347-kube-api-access-p6827\") pod \"691bbae8-8ade-4e8b-86cf-3498eb24c347\" (UID: \"691bbae8-8ade-4e8b-86cf-3498eb24c347\") " Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.036997 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/691bbae8-8ade-4e8b-86cf-3498eb24c347-logs\") pod \"691bbae8-8ade-4e8b-86cf-3498eb24c347\" (UID: \"691bbae8-8ade-4e8b-86cf-3498eb24c347\") " Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.037055 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/691bbae8-8ade-4e8b-86cf-3498eb24c347-config-data\") pod \"691bbae8-8ade-4e8b-86cf-3498eb24c347\" (UID: \"691bbae8-8ade-4e8b-86cf-3498eb24c347\") " Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.037114 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/691bbae8-8ade-4e8b-86cf-3498eb24c347-combined-ca-bundle\") pod \"691bbae8-8ade-4e8b-86cf-3498eb24c347\" (UID: \"691bbae8-8ade-4e8b-86cf-3498eb24c347\") " Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.037860 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/691bbae8-8ade-4e8b-86cf-3498eb24c347-logs" (OuterVolumeSpecName: "logs") pod "691bbae8-8ade-4e8b-86cf-3498eb24c347" (UID: "691bbae8-8ade-4e8b-86cf-3498eb24c347"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.042491 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/691bbae8-8ade-4e8b-86cf-3498eb24c347-kube-api-access-p6827" (OuterVolumeSpecName: "kube-api-access-p6827") pod "691bbae8-8ade-4e8b-86cf-3498eb24c347" (UID: "691bbae8-8ade-4e8b-86cf-3498eb24c347"). InnerVolumeSpecName "kube-api-access-p6827". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.071130 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/691bbae8-8ade-4e8b-86cf-3498eb24c347-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "691bbae8-8ade-4e8b-86cf-3498eb24c347" (UID: "691bbae8-8ade-4e8b-86cf-3498eb24c347"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.074295 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/691bbae8-8ade-4e8b-86cf-3498eb24c347-config-data" (OuterVolumeSpecName: "config-data") pod "691bbae8-8ade-4e8b-86cf-3498eb24c347" (UID: "691bbae8-8ade-4e8b-86cf-3498eb24c347"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.138425 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/691bbae8-8ade-4e8b-86cf-3498eb24c347-logs\") on node \"crc\" DevicePath \"\"" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.138485 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/691bbae8-8ade-4e8b-86cf-3498eb24c347-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.138498 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/691bbae8-8ade-4e8b-86cf-3498eb24c347-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.138516 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6827\" (UniqueName: \"kubernetes.io/projected/691bbae8-8ade-4e8b-86cf-3498eb24c347-kube-api-access-p6827\") on node \"crc\" DevicePath \"\"" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.347010 4718 generic.go:334] "Generic (PLEG): container finished" podID="691bbae8-8ade-4e8b-86cf-3498eb24c347" containerID="8ee55256bf3ad1c1332b7850c611cb6a954aae434f6d24efeedd4819db6b062f" exitCode=0 Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.347044 4718 generic.go:334] "Generic (PLEG): container finished" podID="691bbae8-8ade-4e8b-86cf-3498eb24c347" containerID="00e2bfa3bf3d2e533a0040240fd3d3157d83babd8ec8e45a6e2b408e30c14ea5" exitCode=143 Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.347978 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.354030 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"691bbae8-8ade-4e8b-86cf-3498eb24c347","Type":"ContainerDied","Data":"8ee55256bf3ad1c1332b7850c611cb6a954aae434f6d24efeedd4819db6b062f"} Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.354106 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"691bbae8-8ade-4e8b-86cf-3498eb24c347","Type":"ContainerDied","Data":"00e2bfa3bf3d2e533a0040240fd3d3157d83babd8ec8e45a6e2b408e30c14ea5"} Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.354132 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"691bbae8-8ade-4e8b-86cf-3498eb24c347","Type":"ContainerDied","Data":"d3099dc040223d66a2666ab6e3496d3f003b6784a5386a0c31f04af6fc18345e"} Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.354156 4718 scope.go:117] "RemoveContainer" containerID="8ee55256bf3ad1c1332b7850c611cb6a954aae434f6d24efeedd4819db6b062f" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.390214 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.392182 4718 scope.go:117] "RemoveContainer" containerID="00e2bfa3bf3d2e533a0040240fd3d3157d83babd8ec8e45a6e2b408e30c14ea5" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.401063 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.421572 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 15:04:18 crc kubenswrapper[4718]: E1123 15:04:18.421962 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="691bbae8-8ade-4e8b-86cf-3498eb24c347" containerName="nova-metadata-metadata" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.421977 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="691bbae8-8ade-4e8b-86cf-3498eb24c347" containerName="nova-metadata-metadata" Nov 23 15:04:18 crc kubenswrapper[4718]: E1123 15:04:18.421994 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="691bbae8-8ade-4e8b-86cf-3498eb24c347" containerName="nova-metadata-log" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.422000 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="691bbae8-8ade-4e8b-86cf-3498eb24c347" containerName="nova-metadata-log" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.422179 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="691bbae8-8ade-4e8b-86cf-3498eb24c347" containerName="nova-metadata-metadata" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.422207 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="691bbae8-8ade-4e8b-86cf-3498eb24c347" containerName="nova-metadata-log" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.423148 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.426163 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.426368 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.427115 4718 scope.go:117] "RemoveContainer" containerID="8ee55256bf3ad1c1332b7850c611cb6a954aae434f6d24efeedd4819db6b062f" Nov 23 15:04:18 crc kubenswrapper[4718]: E1123 15:04:18.427941 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ee55256bf3ad1c1332b7850c611cb6a954aae434f6d24efeedd4819db6b062f\": container with ID starting with 8ee55256bf3ad1c1332b7850c611cb6a954aae434f6d24efeedd4819db6b062f not found: ID does not exist" containerID="8ee55256bf3ad1c1332b7850c611cb6a954aae434f6d24efeedd4819db6b062f" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.428086 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ee55256bf3ad1c1332b7850c611cb6a954aae434f6d24efeedd4819db6b062f"} err="failed to get container status \"8ee55256bf3ad1c1332b7850c611cb6a954aae434f6d24efeedd4819db6b062f\": rpc error: code = NotFound desc = could not find container \"8ee55256bf3ad1c1332b7850c611cb6a954aae434f6d24efeedd4819db6b062f\": container with ID starting with 8ee55256bf3ad1c1332b7850c611cb6a954aae434f6d24efeedd4819db6b062f not found: ID does not exist" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.428222 4718 scope.go:117] "RemoveContainer" containerID="00e2bfa3bf3d2e533a0040240fd3d3157d83babd8ec8e45a6e2b408e30c14ea5" Nov 23 15:04:18 crc kubenswrapper[4718]: E1123 15:04:18.428612 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00e2bfa3bf3d2e533a0040240fd3d3157d83babd8ec8e45a6e2b408e30c14ea5\": container with ID starting with 00e2bfa3bf3d2e533a0040240fd3d3157d83babd8ec8e45a6e2b408e30c14ea5 not found: ID does not exist" containerID="00e2bfa3bf3d2e533a0040240fd3d3157d83babd8ec8e45a6e2b408e30c14ea5" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.428647 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00e2bfa3bf3d2e533a0040240fd3d3157d83babd8ec8e45a6e2b408e30c14ea5"} err="failed to get container status \"00e2bfa3bf3d2e533a0040240fd3d3157d83babd8ec8e45a6e2b408e30c14ea5\": rpc error: code = NotFound desc = could not find container \"00e2bfa3bf3d2e533a0040240fd3d3157d83babd8ec8e45a6e2b408e30c14ea5\": container with ID starting with 00e2bfa3bf3d2e533a0040240fd3d3157d83babd8ec8e45a6e2b408e30c14ea5 not found: ID does not exist" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.428671 4718 scope.go:117] "RemoveContainer" containerID="8ee55256bf3ad1c1332b7850c611cb6a954aae434f6d24efeedd4819db6b062f" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.435705 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ee55256bf3ad1c1332b7850c611cb6a954aae434f6d24efeedd4819db6b062f"} err="failed to get container status \"8ee55256bf3ad1c1332b7850c611cb6a954aae434f6d24efeedd4819db6b062f\": rpc error: code = NotFound desc = could not find container \"8ee55256bf3ad1c1332b7850c611cb6a954aae434f6d24efeedd4819db6b062f\": container with ID starting with 
8ee55256bf3ad1c1332b7850c611cb6a954aae434f6d24efeedd4819db6b062f not found: ID does not exist" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.435765 4718 scope.go:117] "RemoveContainer" containerID="00e2bfa3bf3d2e533a0040240fd3d3157d83babd8ec8e45a6e2b408e30c14ea5" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.436314 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00e2bfa3bf3d2e533a0040240fd3d3157d83babd8ec8e45a6e2b408e30c14ea5"} err="failed to get container status \"00e2bfa3bf3d2e533a0040240fd3d3157d83babd8ec8e45a6e2b408e30c14ea5\": rpc error: code = NotFound desc = could not find container \"00e2bfa3bf3d2e533a0040240fd3d3157d83babd8ec8e45a6e2b408e30c14ea5\": container with ID starting with 00e2bfa3bf3d2e533a0040240fd3d3157d83babd8ec8e45a6e2b408e30c14ea5 not found: ID does not exist" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.447456 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f434856a-502b-47bd-9fd4-ea7b8d82c441-logs\") pod \"nova-metadata-0\" (UID: \"f434856a-502b-47bd-9fd4-ea7b8d82c441\") " pod="openstack/nova-metadata-0" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.447551 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f434856a-502b-47bd-9fd4-ea7b8d82c441-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f434856a-502b-47bd-9fd4-ea7b8d82c441\") " pod="openstack/nova-metadata-0" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.447644 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f434856a-502b-47bd-9fd4-ea7b8d82c441-config-data\") pod \"nova-metadata-0\" (UID: \"f434856a-502b-47bd-9fd4-ea7b8d82c441\") " pod="openstack/nova-metadata-0" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.447714 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6tdz\" (UniqueName: \"kubernetes.io/projected/f434856a-502b-47bd-9fd4-ea7b8d82c441-kube-api-access-w6tdz\") pod \"nova-metadata-0\" (UID: \"f434856a-502b-47bd-9fd4-ea7b8d82c441\") " pod="openstack/nova-metadata-0" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.447734 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f434856a-502b-47bd-9fd4-ea7b8d82c441-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f434856a-502b-47bd-9fd4-ea7b8d82c441\") " pod="openstack/nova-metadata-0" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.476735 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="691bbae8-8ade-4e8b-86cf-3498eb24c347" path="/var/lib/kubelet/pods/691bbae8-8ade-4e8b-86cf-3498eb24c347/volumes" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.477726 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.549193 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f434856a-502b-47bd-9fd4-ea7b8d82c441-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f434856a-502b-47bd-9fd4-ea7b8d82c441\") " 
pod="openstack/nova-metadata-0" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.549275 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f434856a-502b-47bd-9fd4-ea7b8d82c441-config-data\") pod \"nova-metadata-0\" (UID: \"f434856a-502b-47bd-9fd4-ea7b8d82c441\") " pod="openstack/nova-metadata-0" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.549316 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6tdz\" (UniqueName: \"kubernetes.io/projected/f434856a-502b-47bd-9fd4-ea7b8d82c441-kube-api-access-w6tdz\") pod \"nova-metadata-0\" (UID: \"f434856a-502b-47bd-9fd4-ea7b8d82c441\") " pod="openstack/nova-metadata-0" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.549331 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f434856a-502b-47bd-9fd4-ea7b8d82c441-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f434856a-502b-47bd-9fd4-ea7b8d82c441\") " pod="openstack/nova-metadata-0" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.549425 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f434856a-502b-47bd-9fd4-ea7b8d82c441-logs\") pod \"nova-metadata-0\" (UID: \"f434856a-502b-47bd-9fd4-ea7b8d82c441\") " pod="openstack/nova-metadata-0" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.550259 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f434856a-502b-47bd-9fd4-ea7b8d82c441-logs\") pod \"nova-metadata-0\" (UID: \"f434856a-502b-47bd-9fd4-ea7b8d82c441\") " pod="openstack/nova-metadata-0" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.554149 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f434856a-502b-47bd-9fd4-ea7b8d82c441-config-data\") pod \"nova-metadata-0\" (UID: \"f434856a-502b-47bd-9fd4-ea7b8d82c441\") " pod="openstack/nova-metadata-0" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.554541 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f434856a-502b-47bd-9fd4-ea7b8d82c441-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f434856a-502b-47bd-9fd4-ea7b8d82c441\") " pod="openstack/nova-metadata-0" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.555558 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f434856a-502b-47bd-9fd4-ea7b8d82c441-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f434856a-502b-47bd-9fd4-ea7b8d82c441\") " pod="openstack/nova-metadata-0" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.571312 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6tdz\" (UniqueName: \"kubernetes.io/projected/f434856a-502b-47bd-9fd4-ea7b8d82c441-kube-api-access-w6tdz\") pod \"nova-metadata-0\" (UID: \"f434856a-502b-47bd-9fd4-ea7b8d82c441\") " pod="openstack/nova-metadata-0" Nov 23 15:04:18 crc kubenswrapper[4718]: I1123 15:04:18.764038 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 15:04:19 crc kubenswrapper[4718]: I1123 15:04:19.359259 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 15:04:20 crc kubenswrapper[4718]: I1123 15:04:20.412651 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f434856a-502b-47bd-9fd4-ea7b8d82c441","Type":"ContainerStarted","Data":"513d43284e853a97aee95339980a14e87b090a2286135d9bff8133e5709d2155"} Nov 23 15:04:20 crc kubenswrapper[4718]: I1123 15:04:20.413256 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f434856a-502b-47bd-9fd4-ea7b8d82c441","Type":"ContainerStarted","Data":"2dc3aaf4ea7c80b354d09583e283105c60bc14c7215004b12d3577dbc61f09ac"} Nov 23 15:04:20 crc kubenswrapper[4718]: I1123 15:04:20.413270 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f434856a-502b-47bd-9fd4-ea7b8d82c441","Type":"ContainerStarted","Data":"29de13825d9b5f5773f34f6f403dedbb55aeee9f94b57d0ab7c076b0d4eaf155"} Nov 23 15:04:20 crc kubenswrapper[4718]: I1123 15:04:20.415495 4718 generic.go:334] "Generic (PLEG): container finished" podID="c5d8481e-f2de-46c5-8b56-7c85e054378d" containerID="20bdfde85037bd0b202b821ceb6b80b89569184ba715667609480e9b298198ae" exitCode=0 Nov 23 15:04:20 crc kubenswrapper[4718]: I1123 15:04:20.415522 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qtcsw" event={"ID":"c5d8481e-f2de-46c5-8b56-7c85e054378d","Type":"ContainerDied","Data":"20bdfde85037bd0b202b821ceb6b80b89569184ba715667609480e9b298198ae"} Nov 23 15:04:20 crc kubenswrapper[4718]: I1123 15:04:20.440543 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.440526972 podStartE2EDuration="2.440526972s" podCreationTimestamp="2025-11-23 15:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:04:20.435591319 +0000 UTC m=+1111.675211163" watchObservedRunningTime="2025-11-23 15:04:20.440526972 +0000 UTC m=+1111.680146806" Nov 23 15:04:20 crc kubenswrapper[4718]: I1123 15:04:20.761538 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 15:04:20 crc kubenswrapper[4718]: I1123 15:04:20.761633 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 15:04:20 crc kubenswrapper[4718]: I1123 15:04:20.966209 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 23 15:04:21 crc kubenswrapper[4718]: I1123 15:04:21.027853 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 23 15:04:21 crc kubenswrapper[4718]: I1123 15:04:21.028199 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 23 15:04:21 crc kubenswrapper[4718]: I1123 15:04:21.059578 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 23 15:04:21 crc kubenswrapper[4718]: I1123 15:04:21.260054 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" Nov 23 15:04:21 crc kubenswrapper[4718]: I1123 15:04:21.321818 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-5c9776ccc5-4h5th"]
Nov 23 15:04:21 crc kubenswrapper[4718]: I1123 15:04:21.322043 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th" podUID="251d390c-d74c-4a3f-9ec4-9f995abc4c24" containerName="dnsmasq-dns" containerID="cri-o://1b799bcc2470081d06a138b4616aa75415a084d4e4f05373afbfb1cedd22bb62" gracePeriod=10
Nov 23 15:04:21 crc kubenswrapper[4718]: I1123 15:04:21.480019 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Nov 23 15:04:21 crc kubenswrapper[4718]: I1123 15:04:21.846610 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1b26fa96-edcf-446b-8f92-4d7144c6186e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 23 15:04:21 crc kubenswrapper[4718]: I1123 15:04:21.846677 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1b26fa96-edcf-446b-8f92-4d7144c6186e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 23 15:04:21 crc kubenswrapper[4718]: I1123 15:04:21.953228 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qtcsw"
Nov 23 15:04:21 crc kubenswrapper[4718]: I1123 15:04:21.956381 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th"
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.141312 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-dns-swift-storage-0\") pod \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\" (UID: \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\") "
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.141383 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-config\") pod \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\" (UID: \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\") "
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.141415 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5d8481e-f2de-46c5-8b56-7c85e054378d-scripts\") pod \"c5d8481e-f2de-46c5-8b56-7c85e054378d\" (UID: \"c5d8481e-f2de-46c5-8b56-7c85e054378d\") "
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.141513 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q76v\" (UniqueName: \"kubernetes.io/projected/c5d8481e-f2de-46c5-8b56-7c85e054378d-kube-api-access-2q76v\") pod \"c5d8481e-f2de-46c5-8b56-7c85e054378d\" (UID: \"c5d8481e-f2de-46c5-8b56-7c85e054378d\") "
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.141541 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-dns-svc\") pod \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\" (UID: \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\") "
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.141571 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-ovsdbserver-nb\") pod \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\" (UID: \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\") "
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.141599 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdmfx\" (UniqueName: \"kubernetes.io/projected/251d390c-d74c-4a3f-9ec4-9f995abc4c24-kube-api-access-jdmfx\") pod \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\" (UID: \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\") "
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.141801 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d8481e-f2de-46c5-8b56-7c85e054378d-combined-ca-bundle\") pod \"c5d8481e-f2de-46c5-8b56-7c85e054378d\" (UID: \"c5d8481e-f2de-46c5-8b56-7c85e054378d\") "
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.141835 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5d8481e-f2de-46c5-8b56-7c85e054378d-config-data\") pod \"c5d8481e-f2de-46c5-8b56-7c85e054378d\" (UID: \"c5d8481e-f2de-46c5-8b56-7c85e054378d\") "
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.141862 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-ovsdbserver-sb\") pod \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\" (UID: \"251d390c-d74c-4a3f-9ec4-9f995abc4c24\") "
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.149743 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/251d390c-d74c-4a3f-9ec4-9f995abc4c24-kube-api-access-jdmfx" (OuterVolumeSpecName: "kube-api-access-jdmfx") pod "251d390c-d74c-4a3f-9ec4-9f995abc4c24" (UID: "251d390c-d74c-4a3f-9ec4-9f995abc4c24"). InnerVolumeSpecName "kube-api-access-jdmfx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.159749 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5d8481e-f2de-46c5-8b56-7c85e054378d-kube-api-access-2q76v" (OuterVolumeSpecName: "kube-api-access-2q76v") pod "c5d8481e-f2de-46c5-8b56-7c85e054378d" (UID: "c5d8481e-f2de-46c5-8b56-7c85e054378d"). InnerVolumeSpecName "kube-api-access-2q76v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.162091 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5d8481e-f2de-46c5-8b56-7c85e054378d-scripts" (OuterVolumeSpecName: "scripts") pod "c5d8481e-f2de-46c5-8b56-7c85e054378d" (UID: "c5d8481e-f2de-46c5-8b56-7c85e054378d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.211744 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5d8481e-f2de-46c5-8b56-7c85e054378d-config-data" (OuterVolumeSpecName: "config-data") pod "c5d8481e-f2de-46c5-8b56-7c85e054378d" (UID: "c5d8481e-f2de-46c5-8b56-7c85e054378d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.212723 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5d8481e-f2de-46c5-8b56-7c85e054378d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5d8481e-f2de-46c5-8b56-7c85e054378d" (UID: "c5d8481e-f2de-46c5-8b56-7c85e054378d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.242392 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "251d390c-d74c-4a3f-9ec4-9f995abc4c24" (UID: "251d390c-d74c-4a3f-9ec4-9f995abc4c24"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.243895 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d8481e-f2de-46c5-8b56-7c85e054378d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.243924 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5d8481e-f2de-46c5-8b56-7c85e054378d-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.243992 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.244004 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5d8481e-f2de-46c5-8b56-7c85e054378d-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.244015 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q76v\" (UniqueName: \"kubernetes.io/projected/c5d8481e-f2de-46c5-8b56-7c85e054378d-kube-api-access-2q76v\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.244026 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdmfx\" (UniqueName: \"kubernetes.io/projected/251d390c-d74c-4a3f-9ec4-9f995abc4c24-kube-api-access-jdmfx\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.245366 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "251d390c-d74c-4a3f-9ec4-9f995abc4c24" (UID: "251d390c-d74c-4a3f-9ec4-9f995abc4c24"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.248713 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "251d390c-d74c-4a3f-9ec4-9f995abc4c24" (UID: "251d390c-d74c-4a3f-9ec4-9f995abc4c24"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.267566 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "251d390c-d74c-4a3f-9ec4-9f995abc4c24" (UID: "251d390c-d74c-4a3f-9ec4-9f995abc4c24"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.270546 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-config" (OuterVolumeSpecName: "config") pod "251d390c-d74c-4a3f-9ec4-9f995abc4c24" (UID: "251d390c-d74c-4a3f-9ec4-9f995abc4c24"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.345425 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.345479 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-config\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.345489 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.345497 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/251d390c-d74c-4a3f-9ec4-9f995abc4c24-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.433833 4718 generic.go:334] "Generic (PLEG): container finished" podID="371cc685-4543-4483-83bf-bb04f1d750b5" containerID="786c7aed7a2bc3d301bedc9a8377f85b18e5372e6c0fc3d62fb9c39b98483139" exitCode=0
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.433925 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-k2kvx" event={"ID":"371cc685-4543-4483-83bf-bb04f1d750b5","Type":"ContainerDied","Data":"786c7aed7a2bc3d301bedc9a8377f85b18e5372e6c0fc3d62fb9c39b98483139"}
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.436017 4718 generic.go:334] "Generic (PLEG): container finished" podID="251d390c-d74c-4a3f-9ec4-9f995abc4c24" containerID="1b799bcc2470081d06a138b4616aa75415a084d4e4f05373afbfb1cedd22bb62" exitCode=0
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.436079 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th" event={"ID":"251d390c-d74c-4a3f-9ec4-9f995abc4c24","Type":"ContainerDied","Data":"1b799bcc2470081d06a138b4616aa75415a084d4e4f05373afbfb1cedd22bb62"}
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.436111 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th"
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.436135 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-4h5th" event={"ID":"251d390c-d74c-4a3f-9ec4-9f995abc4c24","Type":"ContainerDied","Data":"31c94c5ac9bf5c9b735024d09eee350eccb30371d2b594f5e0186d0aa6fab3a2"}
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.436153 4718 scope.go:117] "RemoveContainer" containerID="1b799bcc2470081d06a138b4616aa75415a084d4e4f05373afbfb1cedd22bb62"
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.438073 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qtcsw"
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.438835 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qtcsw" event={"ID":"c5d8481e-f2de-46c5-8b56-7c85e054378d","Type":"ContainerDied","Data":"d0b129e7cfa7dd9c2443f36ca03ad4ada61c1b2d6f352d85d64c194e7b3f85d4"}
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.438863 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0b129e7cfa7dd9c2443f36ca03ad4ada61c1b2d6f352d85d64c194e7b3f85d4"
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.540721 4718 scope.go:117] "RemoveContainer" containerID="d7d8e0114cc485376e38ecb5de737423ba2b13c9bc4363876146b572f27e6949"
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.578765 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-4h5th"]
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.584384 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-4h5th"]
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.592398 4718 scope.go:117] "RemoveContainer" containerID="1b799bcc2470081d06a138b4616aa75415a084d4e4f05373afbfb1cedd22bb62"
Nov 23 15:04:22 crc kubenswrapper[4718]: E1123 15:04:22.594642 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b799bcc2470081d06a138b4616aa75415a084d4e4f05373afbfb1cedd22bb62\": container with ID starting with 1b799bcc2470081d06a138b4616aa75415a084d4e4f05373afbfb1cedd22bb62 not found: ID does not exist" containerID="1b799bcc2470081d06a138b4616aa75415a084d4e4f05373afbfb1cedd22bb62"
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.594687 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b799bcc2470081d06a138b4616aa75415a084d4e4f05373afbfb1cedd22bb62"} err="failed to get container status \"1b799bcc2470081d06a138b4616aa75415a084d4e4f05373afbfb1cedd22bb62\": rpc error: code = NotFound desc = could not find container \"1b799bcc2470081d06a138b4616aa75415a084d4e4f05373afbfb1cedd22bb62\": container with ID starting with 1b799bcc2470081d06a138b4616aa75415a084d4e4f05373afbfb1cedd22bb62 not found: ID does not exist"
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.594722 4718 scope.go:117] "RemoveContainer" containerID="d7d8e0114cc485376e38ecb5de737423ba2b13c9bc4363876146b572f27e6949"
Nov 23 15:04:22 crc kubenswrapper[4718]: E1123 15:04:22.594979 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7d8e0114cc485376e38ecb5de737423ba2b13c9bc4363876146b572f27e6949\": container with ID starting with d7d8e0114cc485376e38ecb5de737423ba2b13c9bc4363876146b572f27e6949 not found: ID does not exist" containerID="d7d8e0114cc485376e38ecb5de737423ba2b13c9bc4363876146b572f27e6949"
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.595010 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7d8e0114cc485376e38ecb5de737423ba2b13c9bc4363876146b572f27e6949"} err="failed to get container status \"d7d8e0114cc485376e38ecb5de737423ba2b13c9bc4363876146b572f27e6949\": rpc error: code = NotFound desc = could not find container \"d7d8e0114cc485376e38ecb5de737423ba2b13c9bc4363876146b572f27e6949\": container with ID starting with d7d8e0114cc485376e38ecb5de737423ba2b13c9bc4363876146b572f27e6949 not found: ID does not exist"
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.757810 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.838257 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.838492 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1b26fa96-edcf-446b-8f92-4d7144c6186e" containerName="nova-api-log" containerID="cri-o://a2474818c53b7db9d82b08ed6cb80c2a867951212727a1710153ce3461c5c9bb" gracePeriod=30
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.838613 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1b26fa96-edcf-446b-8f92-4d7144c6186e" containerName="nova-api-api" containerID="cri-o://51586b95c3c9f835f15678f44426c95f13fb870c70271daf51e0bc9e69cab431" gracePeriod=30
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.851281 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.851544 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f434856a-502b-47bd-9fd4-ea7b8d82c441" containerName="nova-metadata-log" containerID="cri-o://2dc3aaf4ea7c80b354d09583e283105c60bc14c7215004b12d3577dbc61f09ac" gracePeriod=30
Nov 23 15:04:22 crc kubenswrapper[4718]: I1123 15:04:22.851614 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f434856a-502b-47bd-9fd4-ea7b8d82c441" containerName="nova-metadata-metadata" containerID="cri-o://513d43284e853a97aee95339980a14e87b090a2286135d9bff8133e5709d2155" gracePeriod=30
Nov 23 15:04:23 crc kubenswrapper[4718]: I1123 15:04:23.061729 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 15:04:23 crc kubenswrapper[4718]: I1123 15:04:23.061989 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 15:04:23 crc kubenswrapper[4718]: I1123 15:04:23.451372 4718 generic.go:334] "Generic (PLEG): container finished" podID="f434856a-502b-47bd-9fd4-ea7b8d82c441" containerID="513d43284e853a97aee95339980a14e87b090a2286135d9bff8133e5709d2155" exitCode=0
Nov 23 15:04:23 crc kubenswrapper[4718]: I1123 15:04:23.451406 4718 generic.go:334] "Generic (PLEG): container finished" podID="f434856a-502b-47bd-9fd4-ea7b8d82c441" containerID="2dc3aaf4ea7c80b354d09583e283105c60bc14c7215004b12d3577dbc61f09ac" exitCode=143
Nov 23 15:04:23 crc kubenswrapper[4718]: I1123 15:04:23.451447 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f434856a-502b-47bd-9fd4-ea7b8d82c441","Type":"ContainerDied","Data":"513d43284e853a97aee95339980a14e87b090a2286135d9bff8133e5709d2155"}
Nov 23 15:04:23 crc kubenswrapper[4718]: I1123 15:04:23.451504 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f434856a-502b-47bd-9fd4-ea7b8d82c441","Type":"ContainerDied","Data":"2dc3aaf4ea7c80b354d09583e283105c60bc14c7215004b12d3577dbc61f09ac"}
Nov 23 15:04:23 crc kubenswrapper[4718]: I1123 15:04:23.453961 4718 generic.go:334] "Generic (PLEG): container finished" podID="1b26fa96-edcf-446b-8f92-4d7144c6186e" containerID="a2474818c53b7db9d82b08ed6cb80c2a867951212727a1710153ce3461c5c9bb" exitCode=143
Nov 23 15:04:23 crc kubenswrapper[4718]: I1123 15:04:23.454020 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b26fa96-edcf-446b-8f92-4d7144c6186e","Type":"ContainerDied","Data":"a2474818c53b7db9d82b08ed6cb80c2a867951212727a1710153ce3461c5c9bb"}
Nov 23 15:04:23 crc kubenswrapper[4718]: I1123 15:04:23.764867 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 23 15:04:23 crc kubenswrapper[4718]: I1123 15:04:23.764916 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 23 15:04:23 crc kubenswrapper[4718]: I1123 15:04:23.825566 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-k2kvx"
Nov 23 15:04:23 crc kubenswrapper[4718]: I1123 15:04:23.979042 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/371cc685-4543-4483-83bf-bb04f1d750b5-config-data\") pod \"371cc685-4543-4483-83bf-bb04f1d750b5\" (UID: \"371cc685-4543-4483-83bf-bb04f1d750b5\") "
Nov 23 15:04:23 crc kubenswrapper[4718]: I1123 15:04:23.979400 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/371cc685-4543-4483-83bf-bb04f1d750b5-scripts\") pod \"371cc685-4543-4483-83bf-bb04f1d750b5\" (UID: \"371cc685-4543-4483-83bf-bb04f1d750b5\") "
Nov 23 15:04:23 crc kubenswrapper[4718]: I1123 15:04:23.979482 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq2r9\" (UniqueName: \"kubernetes.io/projected/371cc685-4543-4483-83bf-bb04f1d750b5-kube-api-access-mq2r9\") pod \"371cc685-4543-4483-83bf-bb04f1d750b5\" (UID: \"371cc685-4543-4483-83bf-bb04f1d750b5\") "
Nov 23 15:04:23 crc kubenswrapper[4718]: I1123 15:04:23.979568 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/371cc685-4543-4483-83bf-bb04f1d750b5-combined-ca-bundle\") pod \"371cc685-4543-4483-83bf-bb04f1d750b5\" (UID: \"371cc685-4543-4483-83bf-bb04f1d750b5\") "
Nov 23 15:04:23 crc kubenswrapper[4718]: I1123 15:04:23.985890 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/371cc685-4543-4483-83bf-bb04f1d750b5-scripts" (OuterVolumeSpecName: "scripts") pod "371cc685-4543-4483-83bf-bb04f1d750b5" (UID: "371cc685-4543-4483-83bf-bb04f1d750b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:04:23 crc kubenswrapper[4718]: I1123 15:04:23.986042 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/371cc685-4543-4483-83bf-bb04f1d750b5-kube-api-access-mq2r9" (OuterVolumeSpecName: "kube-api-access-mq2r9") pod "371cc685-4543-4483-83bf-bb04f1d750b5" (UID: "371cc685-4543-4483-83bf-bb04f1d750b5"). InnerVolumeSpecName "kube-api-access-mq2r9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.014956 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/371cc685-4543-4483-83bf-bb04f1d750b5-config-data" (OuterVolumeSpecName: "config-data") pod "371cc685-4543-4483-83bf-bb04f1d750b5" (UID: "371cc685-4543-4483-83bf-bb04f1d750b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.020604 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/371cc685-4543-4483-83bf-bb04f1d750b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "371cc685-4543-4483-83bf-bb04f1d750b5" (UID: "371cc685-4543-4483-83bf-bb04f1d750b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.082138 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/371cc685-4543-4483-83bf-bb04f1d750b5-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.082178 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq2r9\" (UniqueName: \"kubernetes.io/projected/371cc685-4543-4483-83bf-bb04f1d750b5-kube-api-access-mq2r9\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.082189 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/371cc685-4543-4483-83bf-bb04f1d750b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.082199 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/371cc685-4543-4483-83bf-bb04f1d750b5-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.107604 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.286206 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f434856a-502b-47bd-9fd4-ea7b8d82c441-logs\") pod \"f434856a-502b-47bd-9fd4-ea7b8d82c441\" (UID: \"f434856a-502b-47bd-9fd4-ea7b8d82c441\") "
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.286297 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f434856a-502b-47bd-9fd4-ea7b8d82c441-combined-ca-bundle\") pod \"f434856a-502b-47bd-9fd4-ea7b8d82c441\" (UID: \"f434856a-502b-47bd-9fd4-ea7b8d82c441\") "
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.286344 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6tdz\" (UniqueName: \"kubernetes.io/projected/f434856a-502b-47bd-9fd4-ea7b8d82c441-kube-api-access-w6tdz\") pod \"f434856a-502b-47bd-9fd4-ea7b8d82c441\" (UID: \"f434856a-502b-47bd-9fd4-ea7b8d82c441\") "
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.286376 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f434856a-502b-47bd-9fd4-ea7b8d82c441-config-data\") pod \"f434856a-502b-47bd-9fd4-ea7b8d82c441\" (UID: \"f434856a-502b-47bd-9fd4-ea7b8d82c441\") "
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.286524 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f434856a-502b-47bd-9fd4-ea7b8d82c441-nova-metadata-tls-certs\") pod \"f434856a-502b-47bd-9fd4-ea7b8d82c441\" (UID: \"f434856a-502b-47bd-9fd4-ea7b8d82c441\") "
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.286580 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f434856a-502b-47bd-9fd4-ea7b8d82c441-logs" (OuterVolumeSpecName: "logs") pod "f434856a-502b-47bd-9fd4-ea7b8d82c441" (UID: "f434856a-502b-47bd-9fd4-ea7b8d82c441"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.286938 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f434856a-502b-47bd-9fd4-ea7b8d82c441-logs\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.290833 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f434856a-502b-47bd-9fd4-ea7b8d82c441-kube-api-access-w6tdz" (OuterVolumeSpecName: "kube-api-access-w6tdz") pod "f434856a-502b-47bd-9fd4-ea7b8d82c441" (UID: "f434856a-502b-47bd-9fd4-ea7b8d82c441"). InnerVolumeSpecName "kube-api-access-w6tdz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.339246 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f434856a-502b-47bd-9fd4-ea7b8d82c441-config-data" (OuterVolumeSpecName: "config-data") pod "f434856a-502b-47bd-9fd4-ea7b8d82c441" (UID: "f434856a-502b-47bd-9fd4-ea7b8d82c441"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.339681 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f434856a-502b-47bd-9fd4-ea7b8d82c441-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f434856a-502b-47bd-9fd4-ea7b8d82c441" (UID: "f434856a-502b-47bd-9fd4-ea7b8d82c441"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.377532 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f434856a-502b-47bd-9fd4-ea7b8d82c441-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f434856a-502b-47bd-9fd4-ea7b8d82c441" (UID: "f434856a-502b-47bd-9fd4-ea7b8d82c441"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.391080 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f434856a-502b-47bd-9fd4-ea7b8d82c441-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.391124 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6tdz\" (UniqueName: \"kubernetes.io/projected/f434856a-502b-47bd-9fd4-ea7b8d82c441-kube-api-access-w6tdz\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.391139 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f434856a-502b-47bd-9fd4-ea7b8d82c441-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.391150 4718 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f434856a-502b-47bd-9fd4-ea7b8d82c441-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.454927 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="251d390c-d74c-4a3f-9ec4-9f995abc4c24" path="/var/lib/kubelet/pods/251d390c-d74c-4a3f-9ec4-9f995abc4c24/volumes"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.480560 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f434856a-502b-47bd-9fd4-ea7b8d82c441","Type":"ContainerDied","Data":"29de13825d9b5f5773f34f6f403dedbb55aeee9f94b57d0ab7c076b0d4eaf155"}
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.480644 4718 scope.go:117] "RemoveContainer" containerID="513d43284e853a97aee95339980a14e87b090a2286135d9bff8133e5709d2155"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.480771 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.487750 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="44ab2204-132f-4820-a33e-6707a02629fa" containerName="nova-scheduler-scheduler" containerID="cri-o://05ffe1a73a429e91bf6de071e8cd2d59b2b1b734d6dbf858889f7f717f5eff8d" gracePeriod=30
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.488122 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-k2kvx"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.488160 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-k2kvx" event={"ID":"371cc685-4543-4483-83bf-bb04f1d750b5","Type":"ContainerDied","Data":"89bff08d6546eace816218055a699ef0b0ba04a38da17e83fadebf26b31f6493"}
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.488179 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89bff08d6546eace816218055a699ef0b0ba04a38da17e83fadebf26b31f6493"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.533039 4718 scope.go:117] "RemoveContainer" containerID="2dc3aaf4ea7c80b354d09583e283105c60bc14c7215004b12d3577dbc61f09ac"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.546724 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.557504 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.564765 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Nov 23 15:04:24 crc kubenswrapper[4718]: E1123 15:04:24.565167 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="251d390c-d74c-4a3f-9ec4-9f995abc4c24" containerName="init"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.565188 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="251d390c-d74c-4a3f-9ec4-9f995abc4c24" containerName="init"
Nov 23 15:04:24 crc kubenswrapper[4718]: E1123 15:04:24.565206 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5d8481e-f2de-46c5-8b56-7c85e054378d" containerName="nova-manage"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.565213 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5d8481e-f2de-46c5-8b56-7c85e054378d" containerName="nova-manage"
Nov 23 15:04:24 crc kubenswrapper[4718]: E1123 15:04:24.565228 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f434856a-502b-47bd-9fd4-ea7b8d82c441" containerName="nova-metadata-metadata"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.565234 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f434856a-502b-47bd-9fd4-ea7b8d82c441" containerName="nova-metadata-metadata"
Nov 23 15:04:24 crc kubenswrapper[4718]: E1123 15:04:24.565243 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="371cc685-4543-4483-83bf-bb04f1d750b5" containerName="nova-cell1-conductor-db-sync"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.565249 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="371cc685-4543-4483-83bf-bb04f1d750b5" containerName="nova-cell1-conductor-db-sync"
Nov 23 15:04:24 crc kubenswrapper[4718]: E1123 15:04:24.565260 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="251d390c-d74c-4a3f-9ec4-9f995abc4c24" containerName="dnsmasq-dns"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.565267 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="251d390c-d74c-4a3f-9ec4-9f995abc4c24" containerName="dnsmasq-dns"
Nov 23 15:04:24 crc kubenswrapper[4718]: E1123 15:04:24.565280 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f434856a-502b-47bd-9fd4-ea7b8d82c441" containerName="nova-metadata-log"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.565286 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f434856a-502b-47bd-9fd4-ea7b8d82c441" containerName="nova-metadata-log"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.565545 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5d8481e-f2de-46c5-8b56-7c85e054378d" containerName="nova-manage"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.565579 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f434856a-502b-47bd-9fd4-ea7b8d82c441" containerName="nova-metadata-metadata"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.565590 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="371cc685-4543-4483-83bf-bb04f1d750b5" containerName="nova-cell1-conductor-db-sync"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.565606 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f434856a-502b-47bd-9fd4-ea7b8d82c441" containerName="nova-metadata-log"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.565616 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="251d390c-d74c-4a3f-9ec4-9f995abc4c24" containerName="dnsmasq-dns"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.566225 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.570643 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.572132 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.573668 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.579115 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.579355 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.581527 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.599936 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 23 15:04:24 crc kubenswrapper[4718]: E1123 15:04:24.624138 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf434856a_502b_47bd_9fd4_ea7b8d82c441.slice/crio-29de13825d9b5f5773f34f6f403dedbb55aeee9f94b57d0ab7c076b0d4eaf155\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod371cc685_4543_4483_83bf_bb04f1d750b5.slice\": RecentStats: unable to find data in memory cache]"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.700516 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/427b4910-7814-4a43-8b0a-b91b05aed240-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"427b4910-7814-4a43-8b0a-b91b05aed240\") " pod="openstack/nova-cell1-conductor-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.700591 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkrqp\" (UniqueName: \"kubernetes.io/projected/58a07cc4-616d-45d0-a935-1a00d23f3d71-kube-api-access-kkrqp\") pod \"nova-metadata-0\" (UID: \"58a07cc4-616d-45d0-a935-1a00d23f3d71\") " pod="openstack/nova-metadata-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.700671 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/427b4910-7814-4a43-8b0a-b91b05aed240-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"427b4910-7814-4a43-8b0a-b91b05aed240\") " pod="openstack/nova-cell1-conductor-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.700702 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58a07cc4-616d-45d0-a935-1a00d23f3d71-config-data\") pod \"nova-metadata-0\" (UID: \"58a07cc4-616d-45d0-a935-1a00d23f3d71\") " pod="openstack/nova-metadata-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.700728 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58a07cc4-616d-45d0-a935-1a00d23f3d71-logs\") pod \"nova-metadata-0\" (UID: \"58a07cc4-616d-45d0-a935-1a00d23f3d71\") " pod="openstack/nova-metadata-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.700821 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jsrv\" (UniqueName: \"kubernetes.io/projected/427b4910-7814-4a43-8b0a-b91b05aed240-kube-api-access-2jsrv\") pod \"nova-cell1-conductor-0\" (UID: \"427b4910-7814-4a43-8b0a-b91b05aed240\") " pod="openstack/nova-cell1-conductor-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.700950 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a07cc4-616d-45d0-a935-1a00d23f3d71-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"58a07cc4-616d-45d0-a935-1a00d23f3d71\") " pod="openstack/nova-metadata-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.701086 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/58a07cc4-616d-45d0-a935-1a00d23f3d71-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"58a07cc4-616d-45d0-a935-1a00d23f3d71\") " pod="openstack/nova-metadata-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.803100 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a07cc4-616d-45d0-a935-1a00d23f3d71-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"58a07cc4-616d-45d0-a935-1a00d23f3d71\") " pod="openstack/nova-metadata-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.803410 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/58a07cc4-616d-45d0-a935-1a00d23f3d71-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"58a07cc4-616d-45d0-a935-1a00d23f3d71\") " pod="openstack/nova-metadata-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.803483 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/427b4910-7814-4a43-8b0a-b91b05aed240-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"427b4910-7814-4a43-8b0a-b91b05aed240\") " pod="openstack/nova-cell1-conductor-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.803533 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkrqp\" (UniqueName: \"kubernetes.io/projected/58a07cc4-616d-45d0-a935-1a00d23f3d71-kube-api-access-kkrqp\") pod \"nova-metadata-0\" (UID: \"58a07cc4-616d-45d0-a935-1a00d23f3d71\") " pod="openstack/nova-metadata-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.803599 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/427b4910-7814-4a43-8b0a-b91b05aed240-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"427b4910-7814-4a43-8b0a-b91b05aed240\") " pod="openstack/nova-cell1-conductor-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.803631 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58a07cc4-616d-45d0-a935-1a00d23f3d71-config-data\") pod \"nova-metadata-0\" (UID: \"58a07cc4-616d-45d0-a935-1a00d23f3d71\") " pod="openstack/nova-metadata-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.803655 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58a07cc4-616d-45d0-a935-1a00d23f3d71-logs\") pod \"nova-metadata-0\" (UID: \"58a07cc4-616d-45d0-a935-1a00d23f3d71\") " pod="openstack/nova-metadata-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.803679 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jsrv\" (UniqueName: \"kubernetes.io/projected/427b4910-7814-4a43-8b0a-b91b05aed240-kube-api-access-2jsrv\") pod \"nova-cell1-conductor-0\" (UID: \"427b4910-7814-4a43-8b0a-b91b05aed240\") " pod="openstack/nova-cell1-conductor-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.804783 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58a07cc4-616d-45d0-a935-1a00d23f3d71-logs\") pod \"nova-metadata-0\" (UID: \"58a07cc4-616d-45d0-a935-1a00d23f3d71\") " pod="openstack/nova-metadata-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.809214 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58a07cc4-616d-45d0-a935-1a00d23f3d71-config-data\") pod \"nova-metadata-0\" (UID: \"58a07cc4-616d-45d0-a935-1a00d23f3d71\") " pod="openstack/nova-metadata-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.809235 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/427b4910-7814-4a43-8b0a-b91b05aed240-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"427b4910-7814-4a43-8b0a-b91b05aed240\") " pod="openstack/nova-cell1-conductor-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.809535 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/427b4910-7814-4a43-8b0a-b91b05aed240-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"427b4910-7814-4a43-8b0a-b91b05aed240\") " pod="openstack/nova-cell1-conductor-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.809599 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/58a07cc4-616d-45d0-a935-1a00d23f3d71-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"58a07cc4-616d-45d0-a935-1a00d23f3d71\") " pod="openstack/nova-metadata-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.809935 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a07cc4-616d-45d0-a935-1a00d23f3d71-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"58a07cc4-616d-45d0-a935-1a00d23f3d71\") " pod="openstack/nova-metadata-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.822837 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkrqp\" (UniqueName: \"kubernetes.io/projected/58a07cc4-616d-45d0-a935-1a00d23f3d71-kube-api-access-kkrqp\") pod \"nova-metadata-0\" (UID: \"58a07cc4-616d-45d0-a935-1a00d23f3d71\") " pod="openstack/nova-metadata-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.829584 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jsrv\" (UniqueName: \"kubernetes.io/projected/427b4910-7814-4a43-8b0a-b91b05aed240-kube-api-access-2jsrv\") pod \"nova-cell1-conductor-0\" (UID: \"427b4910-7814-4a43-8b0a-b91b05aed240\") " pod="openstack/nova-cell1-conductor-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.886579 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Nov 23 15:04:24 crc kubenswrapper[4718]: I1123 15:04:24.901121 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 23 15:04:25 crc kubenswrapper[4718]: W1123 15:04:25.324113 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod427b4910_7814_4a43_8b0a_b91b05aed240.slice/crio-674f24f3b2dca730ac485d4054c619f139dc70dceb07f3a88f8ac1f1c1699628 WatchSource:0}: Error finding container 674f24f3b2dca730ac485d4054c619f139dc70dceb07f3a88f8ac1f1c1699628: Status 404 returned error can't find the container with id 674f24f3b2dca730ac485d4054c619f139dc70dceb07f3a88f8ac1f1c1699628
Nov 23 15:04:25 crc kubenswrapper[4718]: I1123 15:04:25.336969 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Nov 23 15:04:25 crc kubenswrapper[4718]: I1123 15:04:25.397320 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 23 15:04:25 crc kubenswrapper[4718]: W1123 15:04:25.409622 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58a07cc4_616d_45d0_a935_1a00d23f3d71.slice/crio-d8f6bc69bf448c9acedac811258d5a833be20f452d390462b575baba8b362a14 WatchSource:0}: Error finding container d8f6bc69bf448c9acedac811258d5a833be20f452d390462b575baba8b362a14: Status 404 returned error can't find the container with id d8f6bc69bf448c9acedac811258d5a833be20f452d390462b575baba8b362a14
Nov 23 15:04:25 crc kubenswrapper[4718]: I1123 15:04:25.499194 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"58a07cc4-616d-45d0-a935-1a00d23f3d71","Type":"ContainerStarted","Data":"d8f6bc69bf448c9acedac811258d5a833be20f452d390462b575baba8b362a14"}
Nov 23 15:04:25 crc kubenswrapper[4718]: I1123 15:04:25.500069 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"427b4910-7814-4a43-8b0a-b91b05aed240","Type":"ContainerStarted","Data":"674f24f3b2dca730ac485d4054c619f139dc70dceb07f3a88f8ac1f1c1699628"}
Nov 23 15:04:26 crc kubenswrapper[4718]: E1123 15:04:26.030294 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="05ffe1a73a429e91bf6de071e8cd2d59b2b1b734d6dbf858889f7f717f5eff8d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 23 15:04:26 crc kubenswrapper[4718]: E1123 15:04:26.031800 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="05ffe1a73a429e91bf6de071e8cd2d59b2b1b734d6dbf858889f7f717f5eff8d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 23 15:04:26 crc kubenswrapper[4718]: E1123 15:04:26.034883 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="05ffe1a73a429e91bf6de071e8cd2d59b2b1b734d6dbf858889f7f717f5eff8d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 23 15:04:26 crc kubenswrapper[4718]: E1123 15:04:26.034950 4718 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="44ab2204-132f-4820-a33e-6707a02629fa" containerName="nova-scheduler-scheduler"
Nov 23 15:04:26 crc kubenswrapper[4718]: I1123 15:04:26.455020 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f434856a-502b-47bd-9fd4-ea7b8d82c441" path="/var/lib/kubelet/pods/f434856a-502b-47bd-9fd4-ea7b8d82c441/volumes"
Nov 23 15:04:26 crc kubenswrapper[4718]: I1123 15:04:26.519239 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"58a07cc4-616d-45d0-a935-1a00d23f3d71","Type":"ContainerStarted","Data":"6a4987d27f920d52f75939c3d09d6e9d39d6c3dbcceefc44b0ad42e13d0cfb1b"}
Nov 23 15:04:26 crc kubenswrapper[4718]: I1123 15:04:26.519292 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"58a07cc4-616d-45d0-a935-1a00d23f3d71","Type":"ContainerStarted","Data":"aa60533c87b3001ab3fe5f9b05fdf5c49d0157fa170bfa7ad23e333c2e4cd3cc"}
Nov 23 15:04:26 crc kubenswrapper[4718]: I1123 15:04:26.521251 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"427b4910-7814-4a43-8b0a-b91b05aed240","Type":"ContainerStarted","Data":"ddcb206ebb19e4d4e7a808edc7ffa57d7ac7011a819192a050c392312a5dba62"}
Nov 23 15:04:26 crc kubenswrapper[4718]: I1123 15:04:26.522245 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Nov 23 15:04:26 crc kubenswrapper[4718]: I1123 15:04:26.544169 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.544153429 podStartE2EDuration="2.544153429s" podCreationTimestamp="2025-11-23 15:04:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:04:26.539617885 +0000 UTC m=+1117.779237739" watchObservedRunningTime="2025-11-23 15:04:26.544153429 +0000 UTC m=+1117.783773273"
Nov 23 15:04:26 crc kubenswrapper[4718]: I1123 15:04:26.574087 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.574069351 podStartE2EDuration="2.574069351s" podCreationTimestamp="2025-11-23 15:04:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:04:26.569823396 +0000 UTC m=+1117.809443270" watchObservedRunningTime="2025-11-23 15:04:26.574069351 +0000 UTC m=+1117.813689195"
Nov 23 15:04:27 crc kubenswrapper[4718]: I1123 15:04:27.533099 4718 generic.go:334] "Generic (PLEG): container finished" podID="1b26fa96-edcf-446b-8f92-4d7144c6186e" containerID="51586b95c3c9f835f15678f44426c95f13fb870c70271daf51e0bc9e69cab431" exitCode=0
Nov 23 15:04:27 crc kubenswrapper[4718]: I1123 15:04:27.533139 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b26fa96-edcf-446b-8f92-4d7144c6186e","Type":"ContainerDied","Data":"51586b95c3c9f835f15678f44426c95f13fb870c70271daf51e0bc9e69cab431"}
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.204329 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.350098 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.394594 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dp5h\" (UniqueName: \"kubernetes.io/projected/1b26fa96-edcf-446b-8f92-4d7144c6186e-kube-api-access-7dp5h\") pod \"1b26fa96-edcf-446b-8f92-4d7144c6186e\" (UID: \"1b26fa96-edcf-446b-8f92-4d7144c6186e\") "
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.394652 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b26fa96-edcf-446b-8f92-4d7144c6186e-combined-ca-bundle\") pod \"1b26fa96-edcf-446b-8f92-4d7144c6186e\" (UID: \"1b26fa96-edcf-446b-8f92-4d7144c6186e\") "
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.394788 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b26fa96-edcf-446b-8f92-4d7144c6186e-logs\") pod \"1b26fa96-edcf-446b-8f92-4d7144c6186e\" (UID: \"1b26fa96-edcf-446b-8f92-4d7144c6186e\") "
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.394814 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b26fa96-edcf-446b-8f92-4d7144c6186e-config-data\") pod \"1b26fa96-edcf-446b-8f92-4d7144c6186e\" (UID: \"1b26fa96-edcf-446b-8f92-4d7144c6186e\") "
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.398832 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b26fa96-edcf-446b-8f92-4d7144c6186e-logs" (OuterVolumeSpecName: "logs") pod "1b26fa96-edcf-446b-8f92-4d7144c6186e" (UID: "1b26fa96-edcf-446b-8f92-4d7144c6186e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.414580 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b26fa96-edcf-446b-8f92-4d7144c6186e-kube-api-access-7dp5h" (OuterVolumeSpecName: "kube-api-access-7dp5h") pod "1b26fa96-edcf-446b-8f92-4d7144c6186e" (UID: "1b26fa96-edcf-446b-8f92-4d7144c6186e"). InnerVolumeSpecName "kube-api-access-7dp5h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.429985 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b26fa96-edcf-446b-8f92-4d7144c6186e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b26fa96-edcf-446b-8f92-4d7144c6186e" (UID: "1b26fa96-edcf-446b-8f92-4d7144c6186e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.437348 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b26fa96-edcf-446b-8f92-4d7144c6186e-config-data" (OuterVolumeSpecName: "config-data") pod "1b26fa96-edcf-446b-8f92-4d7144c6186e" (UID: "1b26fa96-edcf-446b-8f92-4d7144c6186e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.496790 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbwdp\" (UniqueName: \"kubernetes.io/projected/44ab2204-132f-4820-a33e-6707a02629fa-kube-api-access-pbwdp\") pod \"44ab2204-132f-4820-a33e-6707a02629fa\" (UID: \"44ab2204-132f-4820-a33e-6707a02629fa\") "
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.496891 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44ab2204-132f-4820-a33e-6707a02629fa-combined-ca-bundle\") pod \"44ab2204-132f-4820-a33e-6707a02629fa\" (UID: \"44ab2204-132f-4820-a33e-6707a02629fa\") "
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.496953 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44ab2204-132f-4820-a33e-6707a02629fa-config-data\") pod \"44ab2204-132f-4820-a33e-6707a02629fa\" (UID: \"44ab2204-132f-4820-a33e-6707a02629fa\") "
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.497787 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b26fa96-edcf-446b-8f92-4d7144c6186e-logs\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.497817 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b26fa96-edcf-446b-8f92-4d7144c6186e-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.497834 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dp5h\" (UniqueName: \"kubernetes.io/projected/1b26fa96-edcf-446b-8f92-4d7144c6186e-kube-api-access-7dp5h\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.497854 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b26fa96-edcf-446b-8f92-4d7144c6186e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.500692 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44ab2204-132f-4820-a33e-6707a02629fa-kube-api-access-pbwdp" (OuterVolumeSpecName: "kube-api-access-pbwdp") pod "44ab2204-132f-4820-a33e-6707a02629fa" (UID: "44ab2204-132f-4820-a33e-6707a02629fa"). InnerVolumeSpecName "kube-api-access-pbwdp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.528504 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44ab2204-132f-4820-a33e-6707a02629fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44ab2204-132f-4820-a33e-6707a02629fa" (UID: "44ab2204-132f-4820-a33e-6707a02629fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.533003 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44ab2204-132f-4820-a33e-6707a02629fa-config-data" (OuterVolumeSpecName: "config-data") pod "44ab2204-132f-4820-a33e-6707a02629fa" (UID: "44ab2204-132f-4820-a33e-6707a02629fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.551696 4718 generic.go:334] "Generic (PLEG): container finished" podID="44ab2204-132f-4820-a33e-6707a02629fa" containerID="05ffe1a73a429e91bf6de071e8cd2d59b2b1b734d6dbf858889f7f717f5eff8d" exitCode=0
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.551771 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"44ab2204-132f-4820-a33e-6707a02629fa","Type":"ContainerDied","Data":"05ffe1a73a429e91bf6de071e8cd2d59b2b1b734d6dbf858889f7f717f5eff8d"}
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.551803 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"44ab2204-132f-4820-a33e-6707a02629fa","Type":"ContainerDied","Data":"4272e6dcda40d2eeb2b1fbef0738b3d6e6d9436e6318b7d8c3fc91477236eca6"}
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.551821 4718 scope.go:117] "RemoveContainer" containerID="05ffe1a73a429e91bf6de071e8cd2d59b2b1b734d6dbf858889f7f717f5eff8d"
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.551928 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.564878 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.564928 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b26fa96-edcf-446b-8f92-4d7144c6186e","Type":"ContainerDied","Data":"5b9548041b3ce5d63cd622a4f4c1603fc7b84688b8b93c4c700b338c94e65ac6"}
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.590258 4718 scope.go:117] "RemoveContainer" containerID="05ffe1a73a429e91bf6de071e8cd2d59b2b1b734d6dbf858889f7f717f5eff8d"
Nov 23 15:04:28 crc kubenswrapper[4718]: E1123 15:04:28.590750 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05ffe1a73a429e91bf6de071e8cd2d59b2b1b734d6dbf858889f7f717f5eff8d\": container with ID starting with 05ffe1a73a429e91bf6de071e8cd2d59b2b1b734d6dbf858889f7f717f5eff8d not found: ID does not exist" containerID="05ffe1a73a429e91bf6de071e8cd2d59b2b1b734d6dbf858889f7f717f5eff8d"
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.590779 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05ffe1a73a429e91bf6de071e8cd2d59b2b1b734d6dbf858889f7f717f5eff8d"} err="failed to get container status \"05ffe1a73a429e91bf6de071e8cd2d59b2b1b734d6dbf858889f7f717f5eff8d\": rpc error: code = NotFound desc = could not find container \"05ffe1a73a429e91bf6de071e8cd2d59b2b1b734d6dbf858889f7f717f5eff8d\": container with ID starting with 05ffe1a73a429e91bf6de071e8cd2d59b2b1b734d6dbf858889f7f717f5eff8d not found: ID does not exist"
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.590800 4718 scope.go:117] "RemoveContainer" containerID="51586b95c3c9f835f15678f44426c95f13fb870c70271daf51e0bc9e69cab431"
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.594724 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.601077 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44ab2204-132f-4820-a33e-6707a02629fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.601119 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44ab2204-132f-4820-a33e-6707a02629fa-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.601132 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbwdp\" (UniqueName: \"kubernetes.io/projected/44ab2204-132f-4820-a33e-6707a02629fa-kube-api-access-pbwdp\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.605377 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.614752 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.619402 4718 scope.go:117] "RemoveContainer" containerID="a2474818c53b7db9d82b08ed6cb80c2a867951212727a1710153ce3461c5c9bb"
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.623026 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.631752 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Nov 23 15:04:28 crc kubenswrapper[4718]: E1123 15:04:28.632426 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b26fa96-edcf-446b-8f92-4d7144c6186e" containerName="nova-api-api"
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.632538 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b26fa96-edcf-446b-8f92-4d7144c6186e" containerName="nova-api-api"
Nov 23 15:04:28 crc kubenswrapper[4718]: E1123 15:04:28.632637 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44ab2204-132f-4820-a33e-6707a02629fa" containerName="nova-scheduler-scheduler"
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.632731 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="44ab2204-132f-4820-a33e-6707a02629fa" containerName="nova-scheduler-scheduler"
Nov 23 15:04:28 crc kubenswrapper[4718]: E1123 15:04:28.632803 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b26fa96-edcf-446b-8f92-4d7144c6186e" containerName="nova-api-log"
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.632865 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b26fa96-edcf-446b-8f92-4d7144c6186e" containerName="nova-api-log"
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.633083 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="44ab2204-132f-4820-a33e-6707a02629fa" containerName="nova-scheduler-scheduler"
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.633158 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b26fa96-edcf-446b-8f92-4d7144c6186e" containerName="nova-api-api"
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.633225 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b26fa96-edcf-446b-8f92-4d7144c6186e" containerName="nova-api-log"
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.634009 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.635979 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.640296 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.648255 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.650211 4718 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-0" Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.652459 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.668546 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.804204 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/370e8e20-e2e6-4428-8fb9-0e03617d8d97-config-data\") pod \"nova-api-0\" (UID: \"370e8e20-e2e6-4428-8fb9-0e03617d8d97\") " pod="openstack/nova-api-0" Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.804288 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5mf4\" (UniqueName: \"kubernetes.io/projected/370e8e20-e2e6-4428-8fb9-0e03617d8d97-kube-api-access-t5mf4\") pod \"nova-api-0\" (UID: \"370e8e20-e2e6-4428-8fb9-0e03617d8d97\") " pod="openstack/nova-api-0" Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.804334 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2bvq\" (UniqueName: \"kubernetes.io/projected/9f08df78-9b3f-42cb-811f-468c669c59f2-kube-api-access-h2bvq\") pod \"nova-scheduler-0\" (UID: \"9f08df78-9b3f-42cb-811f-468c669c59f2\") " pod="openstack/nova-scheduler-0" Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.804369 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f08df78-9b3f-42cb-811f-468c669c59f2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9f08df78-9b3f-42cb-811f-468c669c59f2\") " pod="openstack/nova-scheduler-0" Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.804398 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/370e8e20-e2e6-4428-8fb9-0e03617d8d97-logs\") pod \"nova-api-0\" (UID: \"370e8e20-e2e6-4428-8fb9-0e03617d8d97\") " pod="openstack/nova-api-0" Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.804420 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/370e8e20-e2e6-4428-8fb9-0e03617d8d97-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"370e8e20-e2e6-4428-8fb9-0e03617d8d97\") " pod="openstack/nova-api-0" Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.804627 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f08df78-9b3f-42cb-811f-468c669c59f2-config-data\") pod \"nova-scheduler-0\" (UID: \"9f08df78-9b3f-42cb-811f-468c669c59f2\") " pod="openstack/nova-scheduler-0" Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.907133 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f08df78-9b3f-42cb-811f-468c669c59f2-config-data\") pod \"nova-scheduler-0\" (UID: \"9f08df78-9b3f-42cb-811f-468c669c59f2\") " pod="openstack/nova-scheduler-0" Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.907313 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/370e8e20-e2e6-4428-8fb9-0e03617d8d97-config-data\") pod \"nova-api-0\" (UID: \"370e8e20-e2e6-4428-8fb9-0e03617d8d97\") " pod="openstack/nova-api-0" Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.907407 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5mf4\" (UniqueName: \"kubernetes.io/projected/370e8e20-e2e6-4428-8fb9-0e03617d8d97-kube-api-access-t5mf4\") pod \"nova-api-0\" (UID: \"370e8e20-e2e6-4428-8fb9-0e03617d8d97\") " pod="openstack/nova-api-0" Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.908190 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2bvq\" (UniqueName: \"kubernetes.io/projected/9f08df78-9b3f-42cb-811f-468c669c59f2-kube-api-access-h2bvq\") pod \"nova-scheduler-0\" (UID: \"9f08df78-9b3f-42cb-811f-468c669c59f2\") " pod="openstack/nova-scheduler-0" Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.908897 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f08df78-9b3f-42cb-811f-468c669c59f2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9f08df78-9b3f-42cb-811f-468c669c59f2\") " pod="openstack/nova-scheduler-0" Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.909075 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/370e8e20-e2e6-4428-8fb9-0e03617d8d97-logs\") pod \"nova-api-0\" (UID: \"370e8e20-e2e6-4428-8fb9-0e03617d8d97\") " pod="openstack/nova-api-0" Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.910181 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/370e8e20-e2e6-4428-8fb9-0e03617d8d97-logs\") pod \"nova-api-0\" (UID: \"370e8e20-e2e6-4428-8fb9-0e03617d8d97\") " pod="openstack/nova-api-0" Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.910385 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/370e8e20-e2e6-4428-8fb9-0e03617d8d97-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"370e8e20-e2e6-4428-8fb9-0e03617d8d97\") " pod="openstack/nova-api-0" Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.911263 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/370e8e20-e2e6-4428-8fb9-0e03617d8d97-config-data\") pod \"nova-api-0\" (UID: \"370e8e20-e2e6-4428-8fb9-0e03617d8d97\") " pod="openstack/nova-api-0" Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.913100 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f08df78-9b3f-42cb-811f-468c669c59f2-config-data\") pod \"nova-scheduler-0\" (UID: \"9f08df78-9b3f-42cb-811f-468c669c59f2\") " pod="openstack/nova-scheduler-0" Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.913242 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f08df78-9b3f-42cb-811f-468c669c59f2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9f08df78-9b3f-42cb-811f-468c669c59f2\") " pod="openstack/nova-scheduler-0" Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.913767 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/370e8e20-e2e6-4428-8fb9-0e03617d8d97-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"370e8e20-e2e6-4428-8fb9-0e03617d8d97\") " pod="openstack/nova-api-0" Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.925177 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5mf4\" (UniqueName: \"kubernetes.io/projected/370e8e20-e2e6-4428-8fb9-0e03617d8d97-kube-api-access-t5mf4\") pod \"nova-api-0\" (UID: \"370e8e20-e2e6-4428-8fb9-0e03617d8d97\") " pod="openstack/nova-api-0" Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.929768 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2bvq\" (UniqueName: \"kubernetes.io/projected/9f08df78-9b3f-42cb-811f-468c669c59f2-kube-api-access-h2bvq\") pod \"nova-scheduler-0\" (UID: \"9f08df78-9b3f-42cb-811f-468c669c59f2\") " pod="openstack/nova-scheduler-0" Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.953434 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 15:04:28 crc kubenswrapper[4718]: I1123 15:04:28.975160 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 15:04:29 crc kubenswrapper[4718]: I1123 15:04:29.455337 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 15:04:29 crc kubenswrapper[4718]: W1123 15:04:29.464392 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod370e8e20_e2e6_4428_8fb9_0e03617d8d97.slice/crio-110b1034d590373d1eccb5f9c09f2ff98399d34a5396e8c6adc0f5ef7bed483f WatchSource:0}: Error finding container 110b1034d590373d1eccb5f9c09f2ff98399d34a5396e8c6adc0f5ef7bed483f: Status 404 returned error can't find the container with id 110b1034d590373d1eccb5f9c09f2ff98399d34a5396e8c6adc0f5ef7bed483f Nov 23 15:04:29 crc kubenswrapper[4718]: W1123 15:04:29.518348 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f08df78_9b3f_42cb_811f_468c669c59f2.slice/crio-3cf9eb189c2d7b22dc96c18889bec3d34839674e90cf23e0cc3a28138b421162 WatchSource:0}: Error finding container 3cf9eb189c2d7b22dc96c18889bec3d34839674e90cf23e0cc3a28138b421162: Status 404 returned error can't find the container with id 3cf9eb189c2d7b22dc96c18889bec3d34839674e90cf23e0cc3a28138b421162 Nov 23 15:04:29 crc kubenswrapper[4718]: I1123 15:04:29.519757 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 15:04:29 crc kubenswrapper[4718]: I1123 15:04:29.577553 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9f08df78-9b3f-42cb-811f-468c669c59f2","Type":"ContainerStarted","Data":"3cf9eb189c2d7b22dc96c18889bec3d34839674e90cf23e0cc3a28138b421162"} Nov 23 15:04:29 crc kubenswrapper[4718]: I1123 15:04:29.580761 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"370e8e20-e2e6-4428-8fb9-0e03617d8d97","Type":"ContainerStarted","Data":"110b1034d590373d1eccb5f9c09f2ff98399d34a5396e8c6adc0f5ef7bed483f"} Nov 23 15:04:29 crc kubenswrapper[4718]: I1123 15:04:29.902650 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 15:04:29 crc kubenswrapper[4718]: I1123 15:04:29.903062 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" 
Nov 23 15:04:30 crc kubenswrapper[4718]: I1123 15:04:30.456888 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b26fa96-edcf-446b-8f92-4d7144c6186e" path="/var/lib/kubelet/pods/1b26fa96-edcf-446b-8f92-4d7144c6186e/volumes" Nov 23 15:04:30 crc kubenswrapper[4718]: I1123 15:04:30.457557 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44ab2204-132f-4820-a33e-6707a02629fa" path="/var/lib/kubelet/pods/44ab2204-132f-4820-a33e-6707a02629fa/volumes" Nov 23 15:04:30 crc kubenswrapper[4718]: I1123 15:04:30.594800 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9f08df78-9b3f-42cb-811f-468c669c59f2","Type":"ContainerStarted","Data":"7c320ab3a9b108be25503aab9dbaa9d9ab1f995aa25a3b83becf107f57e21b08"} Nov 23 15:04:30 crc kubenswrapper[4718]: I1123 15:04:30.599177 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"370e8e20-e2e6-4428-8fb9-0e03617d8d97","Type":"ContainerStarted","Data":"f0fb36810c087132620971ccb44f0d6a31d4b6345889a2319f8e70320debcf28"} Nov 23 15:04:30 crc kubenswrapper[4718]: I1123 15:04:30.599214 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"370e8e20-e2e6-4428-8fb9-0e03617d8d97","Type":"ContainerStarted","Data":"d9f36f11e0f29e7fd4fb09e0f253d62324c3f4dbde5af8106a36add18ca8f85f"} Nov 23 15:04:30 crc kubenswrapper[4718]: I1123 15:04:30.625554 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.62552468 podStartE2EDuration="2.62552468s" podCreationTimestamp="2025-11-23 15:04:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:04:30.614218013 +0000 UTC m=+1121.853837877" watchObservedRunningTime="2025-11-23 15:04:30.62552468 +0000 UTC m=+1121.865144534" Nov 23 15:04:30 crc kubenswrapper[4718]: I1123 15:04:30.642459 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.642415169 podStartE2EDuration="2.642415169s" podCreationTimestamp="2025-11-23 15:04:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:04:30.63695218 +0000 UTC m=+1121.876572024" watchObservedRunningTime="2025-11-23 15:04:30.642415169 +0000 UTC m=+1121.882035013" Nov 23 15:04:33 crc kubenswrapper[4718]: I1123 15:04:33.954576 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 23 15:04:34 crc kubenswrapper[4718]: I1123 15:04:34.901888 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 23 15:04:34 crc kubenswrapper[4718]: I1123 15:04:34.902297 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 23 15:04:34 crc kubenswrapper[4718]: I1123 15:04:34.916352 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 23 15:04:35 crc kubenswrapper[4718]: I1123 15:04:35.533488 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 23 15:04:35 crc kubenswrapper[4718]: I1123 15:04:35.918215 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="58a07cc4-616d-45d0-a935-1a00d23f3d71" 
containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 15:04:35 crc kubenswrapper[4718]: I1123 15:04:35.918555 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="58a07cc4-616d-45d0-a935-1a00d23f3d71" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 15:04:38 crc kubenswrapper[4718]: I1123 15:04:38.954300 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 23 15:04:38 crc kubenswrapper[4718]: I1123 15:04:38.976543 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 15:04:38 crc kubenswrapper[4718]: I1123 15:04:38.977585 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 15:04:38 crc kubenswrapper[4718]: I1123 15:04:38.993371 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 23 15:04:39 crc kubenswrapper[4718]: I1123 15:04:39.713362 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 15:04:39 crc kubenswrapper[4718]: I1123 15:04:39.713897 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="7309bec0-7ad8-47d8-8f72-ba8944a161e2" containerName="kube-state-metrics" containerID="cri-o://4d189af6aaaa28c4f6206b731bf20e4da52974ba5c8be4e5a72009ab629998c4" gracePeriod=30 Nov 23 15:04:39 crc kubenswrapper[4718]: I1123 15:04:39.734587 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 23 15:04:40 crc kubenswrapper[4718]: I1123 15:04:40.058637 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="370e8e20-e2e6-4428-8fb9-0e03617d8d97" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 15:04:40 crc kubenswrapper[4718]: I1123 15:04:40.058637 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="370e8e20-e2e6-4428-8fb9-0e03617d8d97" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 15:04:40 crc kubenswrapper[4718]: I1123 15:04:40.735556 4718 generic.go:334] "Generic (PLEG): container finished" podID="7309bec0-7ad8-47d8-8f72-ba8944a161e2" containerID="4d189af6aaaa28c4f6206b731bf20e4da52974ba5c8be4e5a72009ab629998c4" exitCode=2 Nov 23 15:04:40 crc kubenswrapper[4718]: I1123 15:04:40.735632 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7309bec0-7ad8-47d8-8f72-ba8944a161e2","Type":"ContainerDied","Data":"4d189af6aaaa28c4f6206b731bf20e4da52974ba5c8be4e5a72009ab629998c4"} Nov 23 15:04:40 crc kubenswrapper[4718]: I1123 15:04:40.736232 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7309bec0-7ad8-47d8-8f72-ba8944a161e2","Type":"ContainerDied","Data":"7f52effd6a639f019cca851c2a43aa50badf5bb8c55afcb3e86754240ae4f44f"} Nov 23 15:04:40 crc kubenswrapper[4718]: I1123 
15:04:40.736245 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f52effd6a639f019cca851c2a43aa50badf5bb8c55afcb3e86754240ae4f44f" Nov 23 15:04:40 crc kubenswrapper[4718]: I1123 15:04:40.798083 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 15:04:40 crc kubenswrapper[4718]: I1123 15:04:40.866452 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28lf5\" (UniqueName: \"kubernetes.io/projected/7309bec0-7ad8-47d8-8f72-ba8944a161e2-kube-api-access-28lf5\") pod \"7309bec0-7ad8-47d8-8f72-ba8944a161e2\" (UID: \"7309bec0-7ad8-47d8-8f72-ba8944a161e2\") " Nov 23 15:04:40 crc kubenswrapper[4718]: I1123 15:04:40.889636 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7309bec0-7ad8-47d8-8f72-ba8944a161e2-kube-api-access-28lf5" (OuterVolumeSpecName: "kube-api-access-28lf5") pod "7309bec0-7ad8-47d8-8f72-ba8944a161e2" (UID: "7309bec0-7ad8-47d8-8f72-ba8944a161e2"). InnerVolumeSpecName "kube-api-access-28lf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:04:40 crc kubenswrapper[4718]: I1123 15:04:40.967682 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28lf5\" (UniqueName: \"kubernetes.io/projected/7309bec0-7ad8-47d8-8f72-ba8944a161e2-kube-api-access-28lf5\") on node \"crc\" DevicePath \"\"" Nov 23 15:04:41 crc kubenswrapper[4718]: I1123 15:04:41.712262 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 15:04:41 crc kubenswrapper[4718]: I1123 15:04:41.712923 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e0372d5-8440-4058-964b-0d1b2023c706" containerName="ceilometer-central-agent" containerID="cri-o://3032699c600a460605924488bb745a1d27829dc8eafa420ba64f144c5f126992" gracePeriod=30 Nov 23 15:04:41 crc kubenswrapper[4718]: I1123 15:04:41.712998 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e0372d5-8440-4058-964b-0d1b2023c706" containerName="sg-core" containerID="cri-o://733c49db40cbc43ad1d3b29e241b8bc03e33d3e9fa2d3c629e33e3a140393240" gracePeriod=30 Nov 23 15:04:41 crc kubenswrapper[4718]: I1123 15:04:41.713031 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e0372d5-8440-4058-964b-0d1b2023c706" containerName="ceilometer-notification-agent" containerID="cri-o://e37e5d2d63c870c665ea363696f332caa647d8f652be8b48e2bfbdf2c64769ee" gracePeriod=30 Nov 23 15:04:41 crc kubenswrapper[4718]: I1123 15:04:41.712999 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e0372d5-8440-4058-964b-0d1b2023c706" containerName="proxy-httpd" containerID="cri-o://ee3968074e36e46c2c03413cc84523326d8a99b7eafdc1ae4db72dc47cb59155" gracePeriod=30 Nov 23 15:04:41 crc kubenswrapper[4718]: I1123 15:04:41.743614 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 15:04:41 crc kubenswrapper[4718]: I1123 15:04:41.775202 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 15:04:41 crc kubenswrapper[4718]: I1123 15:04:41.784674 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 15:04:41 crc kubenswrapper[4718]: I1123 15:04:41.796007 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 15:04:41 crc kubenswrapper[4718]: E1123 15:04:41.796363 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7309bec0-7ad8-47d8-8f72-ba8944a161e2" containerName="kube-state-metrics" Nov 23 15:04:41 crc kubenswrapper[4718]: I1123 15:04:41.796379 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7309bec0-7ad8-47d8-8f72-ba8944a161e2" containerName="kube-state-metrics" Nov 23 15:04:41 crc kubenswrapper[4718]: I1123 15:04:41.796561 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7309bec0-7ad8-47d8-8f72-ba8944a161e2" containerName="kube-state-metrics" Nov 23 15:04:41 crc kubenswrapper[4718]: I1123 15:04:41.797169 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 15:04:41 crc kubenswrapper[4718]: I1123 15:04:41.799379 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 23 15:04:41 crc kubenswrapper[4718]: I1123 15:04:41.800720 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 23 15:04:41 crc kubenswrapper[4718]: I1123 15:04:41.806652 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 15:04:41 crc kubenswrapper[4718]: I1123 15:04:41.987610 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd84g\" (UniqueName: \"kubernetes.io/projected/34c11acb-21ce-4e87-baab-f6f765d508cf-kube-api-access-jd84g\") pod \"kube-state-metrics-0\" (UID: \"34c11acb-21ce-4e87-baab-f6f765d508cf\") " pod="openstack/kube-state-metrics-0" Nov 23 15:04:41 crc kubenswrapper[4718]: I1123 15:04:41.987654 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/34c11acb-21ce-4e87-baab-f6f765d508cf-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"34c11acb-21ce-4e87-baab-f6f765d508cf\") " pod="openstack/kube-state-metrics-0" Nov 23 15:04:41 crc kubenswrapper[4718]: I1123 15:04:41.987681 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/34c11acb-21ce-4e87-baab-f6f765d508cf-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"34c11acb-21ce-4e87-baab-f6f765d508cf\") " pod="openstack/kube-state-metrics-0" Nov 23 15:04:41 crc kubenswrapper[4718]: I1123 15:04:41.987736 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c11acb-21ce-4e87-baab-f6f765d508cf-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"34c11acb-21ce-4e87-baab-f6f765d508cf\") " pod="openstack/kube-state-metrics-0" Nov 23 15:04:42 crc kubenswrapper[4718]: I1123 15:04:42.089416 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jd84g\" (UniqueName: \"kubernetes.io/projected/34c11acb-21ce-4e87-baab-f6f765d508cf-kube-api-access-jd84g\") pod \"kube-state-metrics-0\" (UID: \"34c11acb-21ce-4e87-baab-f6f765d508cf\") " pod="openstack/kube-state-metrics-0" Nov 23 15:04:42 crc kubenswrapper[4718]: I1123 15:04:42.089484 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/34c11acb-21ce-4e87-baab-f6f765d508cf-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"34c11acb-21ce-4e87-baab-f6f765d508cf\") " pod="openstack/kube-state-metrics-0" Nov 23 15:04:42 crc kubenswrapper[4718]: I1123 15:04:42.089508 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/34c11acb-21ce-4e87-baab-f6f765d508cf-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"34c11acb-21ce-4e87-baab-f6f765d508cf\") " pod="openstack/kube-state-metrics-0" Nov 23 15:04:42 crc kubenswrapper[4718]: I1123 15:04:42.089551 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c11acb-21ce-4e87-baab-f6f765d508cf-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"34c11acb-21ce-4e87-baab-f6f765d508cf\") " pod="openstack/kube-state-metrics-0" Nov 23 15:04:42 crc kubenswrapper[4718]: I1123 15:04:42.094299 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/34c11acb-21ce-4e87-baab-f6f765d508cf-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"34c11acb-21ce-4e87-baab-f6f765d508cf\") " pod="openstack/kube-state-metrics-0" Nov 23 15:04:42 crc kubenswrapper[4718]: I1123 15:04:42.094787 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/34c11acb-21ce-4e87-baab-f6f765d508cf-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"34c11acb-21ce-4e87-baab-f6f765d508cf\") " pod="openstack/kube-state-metrics-0" Nov 23 15:04:42 crc kubenswrapper[4718]: I1123 15:04:42.100619 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c11acb-21ce-4e87-baab-f6f765d508cf-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"34c11acb-21ce-4e87-baab-f6f765d508cf\") " pod="openstack/kube-state-metrics-0" Nov 23 15:04:42 crc kubenswrapper[4718]: I1123 15:04:42.116172 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd84g\" (UniqueName: \"kubernetes.io/projected/34c11acb-21ce-4e87-baab-f6f765d508cf-kube-api-access-jd84g\") pod \"kube-state-metrics-0\" (UID: \"34c11acb-21ce-4e87-baab-f6f765d508cf\") " pod="openstack/kube-state-metrics-0" Nov 23 15:04:42 crc kubenswrapper[4718]: I1123 15:04:42.415865 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 15:04:42 crc kubenswrapper[4718]: I1123 15:04:42.457922 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7309bec0-7ad8-47d8-8f72-ba8944a161e2" path="/var/lib/kubelet/pods/7309bec0-7ad8-47d8-8f72-ba8944a161e2/volumes" Nov 23 15:04:42 crc kubenswrapper[4718]: I1123 15:04:42.765414 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 15:04:42 crc kubenswrapper[4718]: I1123 15:04:42.812560 4718 generic.go:334] "Generic (PLEG): container finished" podID="4e0372d5-8440-4058-964b-0d1b2023c706" containerID="ee3968074e36e46c2c03413cc84523326d8a99b7eafdc1ae4db72dc47cb59155" exitCode=0 Nov 23 15:04:42 crc kubenswrapper[4718]: I1123 15:04:42.812583 4718 generic.go:334] "Generic (PLEG): container finished" podID="4e0372d5-8440-4058-964b-0d1b2023c706" containerID="733c49db40cbc43ad1d3b29e241b8bc03e33d3e9fa2d3c629e33e3a140393240" exitCode=2 Nov 23 15:04:42 crc kubenswrapper[4718]: I1123 15:04:42.812590 4718 generic.go:334] "Generic (PLEG): container finished" podID="4e0372d5-8440-4058-964b-0d1b2023c706" containerID="3032699c600a460605924488bb745a1d27829dc8eafa420ba64f144c5f126992" exitCode=0 Nov 23 15:04:42 crc kubenswrapper[4718]: I1123 15:04:42.812613 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e0372d5-8440-4058-964b-0d1b2023c706","Type":"ContainerDied","Data":"ee3968074e36e46c2c03413cc84523326d8a99b7eafdc1ae4db72dc47cb59155"} Nov 23 15:04:42 crc kubenswrapper[4718]: I1123 15:04:42.812643 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e0372d5-8440-4058-964b-0d1b2023c706","Type":"ContainerDied","Data":"733c49db40cbc43ad1d3b29e241b8bc03e33d3e9fa2d3c629e33e3a140393240"} Nov 23 15:04:42 crc kubenswrapper[4718]: I1123 15:04:42.812656 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e0372d5-8440-4058-964b-0d1b2023c706","Type":"ContainerDied","Data":"3032699c600a460605924488bb745a1d27829dc8eafa420ba64f144c5f126992"} Nov 23 15:04:43 crc kubenswrapper[4718]: I1123 15:04:43.822388 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"34c11acb-21ce-4e87-baab-f6f765d508cf","Type":"ContainerStarted","Data":"66e98ff624186efca1cd4e1a437b1798ebbeb7a361b0f4f153451e079e76c3e6"} Nov 23 15:04:43 crc kubenswrapper[4718]: I1123 15:04:43.822834 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 23 15:04:43 crc kubenswrapper[4718]: I1123 15:04:43.822850 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"34c11acb-21ce-4e87-baab-f6f765d508cf","Type":"ContainerStarted","Data":"a28976d0a7f3f36208799c77e3120a98f6896f0f2f8515439bd403c377f04558"} Nov 23 15:04:44 crc kubenswrapper[4718]: I1123 15:04:44.907280 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 23 15:04:44 crc kubenswrapper[4718]: I1123 15:04:44.909464 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 23 15:04:44 crc kubenswrapper[4718]: I1123 15:04:44.912948 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 23 15:04:44 crc kubenswrapper[4718]: I1123 15:04:44.928872 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/kube-state-metrics-0" podStartSLOduration=3.504787814 podStartE2EDuration="3.928849901s" podCreationTimestamp="2025-11-23 15:04:41 +0000 UTC" firstStartedPulling="2025-11-23 15:04:42.807705181 +0000 UTC m=+1134.047325025" lastFinishedPulling="2025-11-23 15:04:43.231767248 +0000 UTC m=+1134.471387112" observedRunningTime="2025-11-23 15:04:43.850072052 +0000 UTC m=+1135.089691916" watchObservedRunningTime="2025-11-23 15:04:44.928849901 +0000 UTC m=+1136.168469745" Nov 23 15:04:45 crc kubenswrapper[4718]: E1123 15:04:45.131785 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e0372d5_8440_4058_964b_0d1b2023c706.slice/crio-e37e5d2d63c870c665ea363696f332caa647d8f652be8b48e2bfbdf2c64769ee.scope\": RecentStats: unable to find data in memory cache]" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.495905 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.582852 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e0372d5-8440-4058-964b-0d1b2023c706-sg-core-conf-yaml\") pod \"4e0372d5-8440-4058-964b-0d1b2023c706\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.583001 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e0372d5-8440-4058-964b-0d1b2023c706-log-httpd\") pod \"4e0372d5-8440-4058-964b-0d1b2023c706\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.583043 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0372d5-8440-4058-964b-0d1b2023c706-combined-ca-bundle\") pod \"4e0372d5-8440-4058-964b-0d1b2023c706\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.583070 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e0372d5-8440-4058-964b-0d1b2023c706-scripts\") pod \"4e0372d5-8440-4058-964b-0d1b2023c706\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.583137 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5khzc\" (UniqueName: \"kubernetes.io/projected/4e0372d5-8440-4058-964b-0d1b2023c706-kube-api-access-5khzc\") pod \"4e0372d5-8440-4058-964b-0d1b2023c706\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.583168 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e0372d5-8440-4058-964b-0d1b2023c706-config-data\") pod \"4e0372d5-8440-4058-964b-0d1b2023c706\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.583259 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e0372d5-8440-4058-964b-0d1b2023c706-run-httpd\") pod \"4e0372d5-8440-4058-964b-0d1b2023c706\" (UID: \"4e0372d5-8440-4058-964b-0d1b2023c706\") " Nov 23 15:04:45 crc 
kubenswrapper[4718]: I1123 15:04:45.584173 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e0372d5-8440-4058-964b-0d1b2023c706-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4e0372d5-8440-4058-964b-0d1b2023c706" (UID: "4e0372d5-8440-4058-964b-0d1b2023c706"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.585809 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e0372d5-8440-4058-964b-0d1b2023c706-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4e0372d5-8440-4058-964b-0d1b2023c706" (UID: "4e0372d5-8440-4058-964b-0d1b2023c706"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.589446 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e0372d5-8440-4058-964b-0d1b2023c706-scripts" (OuterVolumeSpecName: "scripts") pod "4e0372d5-8440-4058-964b-0d1b2023c706" (UID: "4e0372d5-8440-4058-964b-0d1b2023c706"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.598863 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e0372d5-8440-4058-964b-0d1b2023c706-kube-api-access-5khzc" (OuterVolumeSpecName: "kube-api-access-5khzc") pod "4e0372d5-8440-4058-964b-0d1b2023c706" (UID: "4e0372d5-8440-4058-964b-0d1b2023c706"). InnerVolumeSpecName "kube-api-access-5khzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.633050 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e0372d5-8440-4058-964b-0d1b2023c706-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4e0372d5-8440-4058-964b-0d1b2023c706" (UID: "4e0372d5-8440-4058-964b-0d1b2023c706"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.683277 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e0372d5-8440-4058-964b-0d1b2023c706-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e0372d5-8440-4058-964b-0d1b2023c706" (UID: "4e0372d5-8440-4058-964b-0d1b2023c706"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.685313 4718 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e0372d5-8440-4058-964b-0d1b2023c706-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.685350 4718 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e0372d5-8440-4058-964b-0d1b2023c706-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.685363 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0372d5-8440-4058-964b-0d1b2023c706-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.685376 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e0372d5-8440-4058-964b-0d1b2023c706-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.685388 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5khzc\" (UniqueName: \"kubernetes.io/projected/4e0372d5-8440-4058-964b-0d1b2023c706-kube-api-access-5khzc\") on node \"crc\" DevicePath \"\"" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.685399 4718 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e0372d5-8440-4058-964b-0d1b2023c706-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.716379 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e0372d5-8440-4058-964b-0d1b2023c706-config-data" (OuterVolumeSpecName: "config-data") pod "4e0372d5-8440-4058-964b-0d1b2023c706" (UID: "4e0372d5-8440-4058-964b-0d1b2023c706"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.787630 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e0372d5-8440-4058-964b-0d1b2023c706-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.841762 4718 generic.go:334] "Generic (PLEG): container finished" podID="4e0372d5-8440-4058-964b-0d1b2023c706" containerID="e37e5d2d63c870c665ea363696f332caa647d8f652be8b48e2bfbdf2c64769ee" exitCode=0 Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.841894 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e0372d5-8440-4058-964b-0d1b2023c706","Type":"ContainerDied","Data":"e37e5d2d63c870c665ea363696f332caa647d8f652be8b48e2bfbdf2c64769ee"} Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.842104 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e0372d5-8440-4058-964b-0d1b2023c706","Type":"ContainerDied","Data":"5aaa5f225e8239bfa10e02e539543e2d8f2eea4a84d67975f60b63c692d212a2"} Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.842157 4718 scope.go:117] "RemoveContainer" containerID="ee3968074e36e46c2c03413cc84523326d8a99b7eafdc1ae4db72dc47cb59155" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.841981 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.847074 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.861386 4718 scope.go:117] "RemoveContainer" containerID="733c49db40cbc43ad1d3b29e241b8bc03e33d3e9fa2d3c629e33e3a140393240" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.912577 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.925201 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.928352 4718 scope.go:117] "RemoveContainer" containerID="e37e5d2d63c870c665ea363696f332caa647d8f652be8b48e2bfbdf2c64769ee" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.929965 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 15:04:45 crc kubenswrapper[4718]: E1123 15:04:45.930410 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0372d5-8440-4058-964b-0d1b2023c706" containerName="ceilometer-notification-agent" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.930425 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0372d5-8440-4058-964b-0d1b2023c706" containerName="ceilometer-notification-agent" Nov 23 15:04:45 crc kubenswrapper[4718]: E1123 15:04:45.930477 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0372d5-8440-4058-964b-0d1b2023c706" containerName="ceilometer-central-agent" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.930486 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0372d5-8440-4058-964b-0d1b2023c706" containerName="ceilometer-central-agent" Nov 23 15:04:45 crc kubenswrapper[4718]: E1123 15:04:45.930500 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0372d5-8440-4058-964b-0d1b2023c706" containerName="proxy-httpd" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.930508 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0372d5-8440-4058-964b-0d1b2023c706" containerName="proxy-httpd" Nov 23 15:04:45 crc kubenswrapper[4718]: E1123 15:04:45.930519 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0372d5-8440-4058-964b-0d1b2023c706" containerName="sg-core" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.930526 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0372d5-8440-4058-964b-0d1b2023c706" containerName="sg-core" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.930747 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e0372d5-8440-4058-964b-0d1b2023c706" containerName="sg-core" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.930776 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e0372d5-8440-4058-964b-0d1b2023c706" containerName="proxy-httpd" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.930789 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e0372d5-8440-4058-964b-0d1b2023c706" containerName="ceilometer-notification-agent" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.930802 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e0372d5-8440-4058-964b-0d1b2023c706" containerName="ceilometer-central-agent" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.933075 4718 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.936360 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.936549 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.936657 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.938144 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.961628 4718 scope.go:117] "RemoveContainer" containerID="3032699c600a460605924488bb745a1d27829dc8eafa420ba64f144c5f126992" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.982642 4718 scope.go:117] "RemoveContainer" containerID="ee3968074e36e46c2c03413cc84523326d8a99b7eafdc1ae4db72dc47cb59155" Nov 23 15:04:45 crc kubenswrapper[4718]: E1123 15:04:45.983107 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee3968074e36e46c2c03413cc84523326d8a99b7eafdc1ae4db72dc47cb59155\": container with ID starting with ee3968074e36e46c2c03413cc84523326d8a99b7eafdc1ae4db72dc47cb59155 not found: ID does not exist" containerID="ee3968074e36e46c2c03413cc84523326d8a99b7eafdc1ae4db72dc47cb59155" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.983144 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee3968074e36e46c2c03413cc84523326d8a99b7eafdc1ae4db72dc47cb59155"} err="failed to get container status \"ee3968074e36e46c2c03413cc84523326d8a99b7eafdc1ae4db72dc47cb59155\": rpc error: code = NotFound desc = could not find container \"ee3968074e36e46c2c03413cc84523326d8a99b7eafdc1ae4db72dc47cb59155\": container with ID starting with ee3968074e36e46c2c03413cc84523326d8a99b7eafdc1ae4db72dc47cb59155 not found: ID does not exist" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.983194 4718 scope.go:117] "RemoveContainer" containerID="733c49db40cbc43ad1d3b29e241b8bc03e33d3e9fa2d3c629e33e3a140393240" Nov 23 15:04:45 crc kubenswrapper[4718]: E1123 15:04:45.983415 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"733c49db40cbc43ad1d3b29e241b8bc03e33d3e9fa2d3c629e33e3a140393240\": container with ID starting with 733c49db40cbc43ad1d3b29e241b8bc03e33d3e9fa2d3c629e33e3a140393240 not found: ID does not exist" containerID="733c49db40cbc43ad1d3b29e241b8bc03e33d3e9fa2d3c629e33e3a140393240" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.983457 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"733c49db40cbc43ad1d3b29e241b8bc03e33d3e9fa2d3c629e33e3a140393240"} err="failed to get container status \"733c49db40cbc43ad1d3b29e241b8bc03e33d3e9fa2d3c629e33e3a140393240\": rpc error: code = NotFound desc = could not find container \"733c49db40cbc43ad1d3b29e241b8bc03e33d3e9fa2d3c629e33e3a140393240\": container with ID starting with 733c49db40cbc43ad1d3b29e241b8bc03e33d3e9fa2d3c629e33e3a140393240 not found: ID does not exist" Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.983473 4718 scope.go:117] "RemoveContainer" containerID="e37e5d2d63c870c665ea363696f332caa647d8f652be8b48e2bfbdf2c64769ee" Nov 
23 15:04:45 crc kubenswrapper[4718]: E1123 15:04:45.983735 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e37e5d2d63c870c665ea363696f332caa647d8f652be8b48e2bfbdf2c64769ee\": container with ID starting with e37e5d2d63c870c665ea363696f332caa647d8f652be8b48e2bfbdf2c64769ee not found: ID does not exist" containerID="e37e5d2d63c870c665ea363696f332caa647d8f652be8b48e2bfbdf2c64769ee"
Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.983753 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e37e5d2d63c870c665ea363696f332caa647d8f652be8b48e2bfbdf2c64769ee"} err="failed to get container status \"e37e5d2d63c870c665ea363696f332caa647d8f652be8b48e2bfbdf2c64769ee\": rpc error: code = NotFound desc = could not find container \"e37e5d2d63c870c665ea363696f332caa647d8f652be8b48e2bfbdf2c64769ee\": container with ID starting with e37e5d2d63c870c665ea363696f332caa647d8f652be8b48e2bfbdf2c64769ee not found: ID does not exist"
Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.983782 4718 scope.go:117] "RemoveContainer" containerID="3032699c600a460605924488bb745a1d27829dc8eafa420ba64f144c5f126992"
Nov 23 15:04:45 crc kubenswrapper[4718]: E1123 15:04:45.984068 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3032699c600a460605924488bb745a1d27829dc8eafa420ba64f144c5f126992\": container with ID starting with 3032699c600a460605924488bb745a1d27829dc8eafa420ba64f144c5f126992 not found: ID does not exist" containerID="3032699c600a460605924488bb745a1d27829dc8eafa420ba64f144c5f126992"
Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.984091 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3032699c600a460605924488bb745a1d27829dc8eafa420ba64f144c5f126992"} err="failed to get container status \"3032699c600a460605924488bb745a1d27829dc8eafa420ba64f144c5f126992\": rpc error: code = NotFound desc = could not find container \"3032699c600a460605924488bb745a1d27829dc8eafa420ba64f144c5f126992\": container with ID starting with 3032699c600a460605924488bb745a1d27829dc8eafa420ba64f144c5f126992 not found: ID does not exist"
Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.991460 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-scripts\") pod \"ceilometer-0\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") " pod="openstack/ceilometer-0"
Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.991554 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") " pod="openstack/ceilometer-0"
Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.991606 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-config-data\") pod \"ceilometer-0\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") " pod="openstack/ceilometer-0"
Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.991658 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a374dce7-758d-4a17-916e-3dfbae172eaa-run-httpd\") pod \"ceilometer-0\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") " pod="openstack/ceilometer-0"
Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.991724 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") " pod="openstack/ceilometer-0"
Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.991746 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swvtt\" (UniqueName: \"kubernetes.io/projected/a374dce7-758d-4a17-916e-3dfbae172eaa-kube-api-access-swvtt\") pod \"ceilometer-0\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") " pod="openstack/ceilometer-0"
Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.991946 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a374dce7-758d-4a17-916e-3dfbae172eaa-log-httpd\") pod \"ceilometer-0\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") " pod="openstack/ceilometer-0"
Nov 23 15:04:45 crc kubenswrapper[4718]: I1123 15:04:45.991998 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") " pod="openstack/ceilometer-0"
Nov 23 15:04:46 crc kubenswrapper[4718]: I1123 15:04:46.093645 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") " pod="openstack/ceilometer-0"
Nov 23 15:04:46 crc kubenswrapper[4718]: I1123 15:04:46.093702 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-config-data\") pod \"ceilometer-0\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") " pod="openstack/ceilometer-0"
Nov 23 15:04:46 crc kubenswrapper[4718]: I1123 15:04:46.093732 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a374dce7-758d-4a17-916e-3dfbae172eaa-run-httpd\") pod \"ceilometer-0\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") " pod="openstack/ceilometer-0"
Nov 23 15:04:46 crc kubenswrapper[4718]: I1123 15:04:46.093909 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") " pod="openstack/ceilometer-0"
Nov 23 15:04:46 crc kubenswrapper[4718]: I1123 15:04:46.094106 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swvtt\" (UniqueName: \"kubernetes.io/projected/a374dce7-758d-4a17-916e-3dfbae172eaa-kube-api-access-swvtt\") pod \"ceilometer-0\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") " pod="openstack/ceilometer-0"
Nov 23 15:04:46 crc kubenswrapper[4718]: I1123 15:04:46.094190 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a374dce7-758d-4a17-916e-3dfbae172eaa-run-httpd\") pod \"ceilometer-0\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") " pod="openstack/ceilometer-0"
Nov 23 15:04:46 crc kubenswrapper[4718]: I1123 15:04:46.094781 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a374dce7-758d-4a17-916e-3dfbae172eaa-log-httpd\") pod \"ceilometer-0\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") " pod="openstack/ceilometer-0"
Nov 23 15:04:46 crc kubenswrapper[4718]: I1123 15:04:46.094823 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") " pod="openstack/ceilometer-0"
Nov 23 15:04:46 crc kubenswrapper[4718]: I1123 15:04:46.094912 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-scripts\") pod \"ceilometer-0\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") " pod="openstack/ceilometer-0"
Nov 23 15:04:46 crc kubenswrapper[4718]: I1123 15:04:46.095428 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a374dce7-758d-4a17-916e-3dfbae172eaa-log-httpd\") pod \"ceilometer-0\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") " pod="openstack/ceilometer-0"
Nov 23 15:04:46 crc kubenswrapper[4718]: I1123 15:04:46.098312 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") " pod="openstack/ceilometer-0"
Nov 23 15:04:46 crc kubenswrapper[4718]: I1123 15:04:46.098372 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") " pod="openstack/ceilometer-0"
Nov 23 15:04:46 crc kubenswrapper[4718]: I1123 15:04:46.098409 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-config-data\") pod \"ceilometer-0\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") " pod="openstack/ceilometer-0"
Nov 23 15:04:46 crc kubenswrapper[4718]: I1123 15:04:46.098684 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") " pod="openstack/ceilometer-0"
Nov 23 15:04:46 crc kubenswrapper[4718]: I1123 15:04:46.099793 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-scripts\") pod \"ceilometer-0\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") " pod="openstack/ceilometer-0"
Nov 23 15:04:46 crc kubenswrapper[4718]: I1123 15:04:46.113543 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swvtt\" (UniqueName: \"kubernetes.io/projected/a374dce7-758d-4a17-916e-3dfbae172eaa-kube-api-access-swvtt\") pod \"ceilometer-0\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") " pod="openstack/ceilometer-0"
Nov 23 15:04:46 crc kubenswrapper[4718]: I1123 15:04:46.254700 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 15:04:46 crc kubenswrapper[4718]: I1123 15:04:46.452762 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e0372d5-8440-4058-964b-0d1b2023c706" path="/var/lib/kubelet/pods/4e0372d5-8440-4058-964b-0d1b2023c706/volumes"
Nov 23 15:04:46 crc kubenswrapper[4718]: I1123 15:04:46.724686 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 15:04:46 crc kubenswrapper[4718]: W1123 15:04:46.725702 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda374dce7_758d_4a17_916e_3dfbae172eaa.slice/crio-9e1dd2ff3ab1d53be78fcddcda524071c48d31400ab82cc2d8f70927cb6aa0ba WatchSource:0}: Error finding container 9e1dd2ff3ab1d53be78fcddcda524071c48d31400ab82cc2d8f70927cb6aa0ba: Status 404 returned error can't find the container with id 9e1dd2ff3ab1d53be78fcddcda524071c48d31400ab82cc2d8f70927cb6aa0ba
Nov 23 15:04:46 crc kubenswrapper[4718]: I1123 15:04:46.727990 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 23 15:04:46 crc kubenswrapper[4718]: I1123 15:04:46.852651 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a374dce7-758d-4a17-916e-3dfbae172eaa","Type":"ContainerStarted","Data":"9e1dd2ff3ab1d53be78fcddcda524071c48d31400ab82cc2d8f70927cb6aa0ba"}
Nov 23 15:04:47 crc kubenswrapper[4718]: I1123 15:04:47.735287 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 23 15:04:47 crc kubenswrapper[4718]: I1123 15:04:47.828232 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b40ce109-6c6a-4a75-8aa9-c749fa12cee3-config-data\") pod \"b40ce109-6c6a-4a75-8aa9-c749fa12cee3\" (UID: \"b40ce109-6c6a-4a75-8aa9-c749fa12cee3\") "
Nov 23 15:04:47 crc kubenswrapper[4718]: I1123 15:04:47.828605 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfvq8\" (UniqueName: \"kubernetes.io/projected/b40ce109-6c6a-4a75-8aa9-c749fa12cee3-kube-api-access-vfvq8\") pod \"b40ce109-6c6a-4a75-8aa9-c749fa12cee3\" (UID: \"b40ce109-6c6a-4a75-8aa9-c749fa12cee3\") "
Nov 23 15:04:47 crc kubenswrapper[4718]: I1123 15:04:47.828702 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b40ce109-6c6a-4a75-8aa9-c749fa12cee3-combined-ca-bundle\") pod \"b40ce109-6c6a-4a75-8aa9-c749fa12cee3\" (UID: \"b40ce109-6c6a-4a75-8aa9-c749fa12cee3\") "
Nov 23 15:04:47 crc kubenswrapper[4718]: I1123 15:04:47.855357 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b40ce109-6c6a-4a75-8aa9-c749fa12cee3-kube-api-access-vfvq8" (OuterVolumeSpecName: "kube-api-access-vfvq8") pod "b40ce109-6c6a-4a75-8aa9-c749fa12cee3" (UID: "b40ce109-6c6a-4a75-8aa9-c749fa12cee3"). InnerVolumeSpecName "kube-api-access-vfvq8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:04:47 crc kubenswrapper[4718]: I1123 15:04:47.879820 4718 generic.go:334] "Generic (PLEG): container finished" podID="b40ce109-6c6a-4a75-8aa9-c749fa12cee3" containerID="209b303c0a804d3c8a71b4ea405c7251bbb2f569886032a7ed2f8022aad2b043" exitCode=137
Nov 23 15:04:47 crc kubenswrapper[4718]: I1123 15:04:47.879864 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b40ce109-6c6a-4a75-8aa9-c749fa12cee3","Type":"ContainerDied","Data":"209b303c0a804d3c8a71b4ea405c7251bbb2f569886032a7ed2f8022aad2b043"}
Nov 23 15:04:47 crc kubenswrapper[4718]: I1123 15:04:47.879885 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 23 15:04:47 crc kubenswrapper[4718]: I1123 15:04:47.879985 4718 scope.go:117] "RemoveContainer" containerID="209b303c0a804d3c8a71b4ea405c7251bbb2f569886032a7ed2f8022aad2b043"
Nov 23 15:04:47 crc kubenswrapper[4718]: I1123 15:04:47.879969 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b40ce109-6c6a-4a75-8aa9-c749fa12cee3","Type":"ContainerDied","Data":"98a729f12bf82400e26e82e12639de82c210a6f2d9be0fbfd8542ce3aaae265a"}
Nov 23 15:04:47 crc kubenswrapper[4718]: I1123 15:04:47.882509 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b40ce109-6c6a-4a75-8aa9-c749fa12cee3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b40ce109-6c6a-4a75-8aa9-c749fa12cee3" (UID: "b40ce109-6c6a-4a75-8aa9-c749fa12cee3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:04:47 crc kubenswrapper[4718]: I1123 15:04:47.882589 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a374dce7-758d-4a17-916e-3dfbae172eaa","Type":"ContainerStarted","Data":"ff8c57efda1c42b6e974f613633b345807b55329495cbccc21fe37ed628fda03"}
Nov 23 15:04:47 crc kubenswrapper[4718]: I1123 15:04:47.883845 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b40ce109-6c6a-4a75-8aa9-c749fa12cee3-config-data" (OuterVolumeSpecName: "config-data") pod "b40ce109-6c6a-4a75-8aa9-c749fa12cee3" (UID: "b40ce109-6c6a-4a75-8aa9-c749fa12cee3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:04:47 crc kubenswrapper[4718]: I1123 15:04:47.912218 4718 scope.go:117] "RemoveContainer" containerID="209b303c0a804d3c8a71b4ea405c7251bbb2f569886032a7ed2f8022aad2b043"
Nov 23 15:04:47 crc kubenswrapper[4718]: E1123 15:04:47.912596 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"209b303c0a804d3c8a71b4ea405c7251bbb2f569886032a7ed2f8022aad2b043\": container with ID starting with 209b303c0a804d3c8a71b4ea405c7251bbb2f569886032a7ed2f8022aad2b043 not found: ID does not exist" containerID="209b303c0a804d3c8a71b4ea405c7251bbb2f569886032a7ed2f8022aad2b043"
Nov 23 15:04:47 crc kubenswrapper[4718]: I1123 15:04:47.912738 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209b303c0a804d3c8a71b4ea405c7251bbb2f569886032a7ed2f8022aad2b043"} err="failed to get container status \"209b303c0a804d3c8a71b4ea405c7251bbb2f569886032a7ed2f8022aad2b043\": rpc error: code = NotFound desc = could not find container \"209b303c0a804d3c8a71b4ea405c7251bbb2f569886032a7ed2f8022aad2b043\": container with ID starting with 209b303c0a804d3c8a71b4ea405c7251bbb2f569886032a7ed2f8022aad2b043 not found: ID does not exist"
Nov 23 15:04:47 crc kubenswrapper[4718]: I1123 15:04:47.933798 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b40ce109-6c6a-4a75-8aa9-c749fa12cee3-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:47 crc kubenswrapper[4718]: I1123 15:04:47.933831 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfvq8\" (UniqueName: \"kubernetes.io/projected/b40ce109-6c6a-4a75-8aa9-c749fa12cee3-kube-api-access-vfvq8\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:47 crc kubenswrapper[4718]: I1123 15:04:47.933848 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b40ce109-6c6a-4a75-8aa9-c749fa12cee3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.254592 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.265193 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.276081 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 23 15:04:48 crc kubenswrapper[4718]: E1123 15:04:48.276795 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b40ce109-6c6a-4a75-8aa9-c749fa12cee3" containerName="nova-cell1-novncproxy-novncproxy"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.276820 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b40ce109-6c6a-4a75-8aa9-c749fa12cee3" containerName="nova-cell1-novncproxy-novncproxy"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.277051 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b40ce109-6c6a-4a75-8aa9-c749fa12cee3" containerName="nova-cell1-novncproxy-novncproxy"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.278062 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.280630 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.280646 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.280673 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.285110 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.340790 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5538cf1-7653-415e-8b15-851291e281f1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5538cf1-7653-415e-8b15-851291e281f1\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.340845 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5538cf1-7653-415e-8b15-851291e281f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5538cf1-7653-415e-8b15-851291e281f1\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.340887 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5538cf1-7653-415e-8b15-851291e281f1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5538cf1-7653-415e-8b15-851291e281f1\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.340965 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjs9x\" (UniqueName: \"kubernetes.io/projected/f5538cf1-7653-415e-8b15-851291e281f1-kube-api-access-xjs9x\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5538cf1-7653-415e-8b15-851291e281f1\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.341023 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5538cf1-7653-415e-8b15-851291e281f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5538cf1-7653-415e-8b15-851291e281f1\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.442061 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5538cf1-7653-415e-8b15-851291e281f1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5538cf1-7653-415e-8b15-851291e281f1\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.442107 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5538cf1-7653-415e-8b15-851291e281f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5538cf1-7653-415e-8b15-851291e281f1\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.442149 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5538cf1-7653-415e-8b15-851291e281f1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5538cf1-7653-415e-8b15-851291e281f1\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.442241 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjs9x\" (UniqueName: \"kubernetes.io/projected/f5538cf1-7653-415e-8b15-851291e281f1-kube-api-access-xjs9x\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5538cf1-7653-415e-8b15-851291e281f1\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.442297 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5538cf1-7653-415e-8b15-851291e281f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5538cf1-7653-415e-8b15-851291e281f1\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.446044 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5538cf1-7653-415e-8b15-851291e281f1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5538cf1-7653-415e-8b15-851291e281f1\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.446044 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5538cf1-7653-415e-8b15-851291e281f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5538cf1-7653-415e-8b15-851291e281f1\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.446447 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5538cf1-7653-415e-8b15-851291e281f1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5538cf1-7653-415e-8b15-851291e281f1\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.446893 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5538cf1-7653-415e-8b15-851291e281f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5538cf1-7653-415e-8b15-851291e281f1\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.465919 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b40ce109-6c6a-4a75-8aa9-c749fa12cee3" path="/var/lib/kubelet/pods/b40ce109-6c6a-4a75-8aa9-c749fa12cee3/volumes"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.479045 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjs9x\" (UniqueName: \"kubernetes.io/projected/f5538cf1-7653-415e-8b15-851291e281f1-kube-api-access-xjs9x\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5538cf1-7653-415e-8b15-851291e281f1\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.601550 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.898495 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a374dce7-758d-4a17-916e-3dfbae172eaa","Type":"ContainerStarted","Data":"90fc296da56997d82fc7cff9f65d3b1f21557669815c570f944e7d10b8656f55"}
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.898821 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a374dce7-758d-4a17-916e-3dfbae172eaa","Type":"ContainerStarted","Data":"965d0d774d091707912708ff9eb86be322405ca224b569ebb316f28c63051432"}
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.986200 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.987702 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.987919 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 23 15:04:48 crc kubenswrapper[4718]: I1123 15:04:48.990695 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 23 15:04:49 crc kubenswrapper[4718]: I1123 15:04:49.097151 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 23 15:04:49 crc kubenswrapper[4718]: W1123 15:04:49.098845 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5538cf1_7653_415e_8b15_851291e281f1.slice/crio-712c3e58f266cd65e3386608e6abd078ba25149e55104b2865e910dd7ebd8766 WatchSource:0}: Error finding container 712c3e58f266cd65e3386608e6abd078ba25149e55104b2865e910dd7ebd8766: Status 404 returned error can't find the container with id 712c3e58f266cd65e3386608e6abd078ba25149e55104b2865e910dd7ebd8766
Nov 23 15:04:49 crc kubenswrapper[4718]: I1123 15:04:49.910789 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f5538cf1-7653-415e-8b15-851291e281f1","Type":"ContainerStarted","Data":"d6b4ddf19dedc1a7d4f080b187dd90511bfabbc63a9e8b0c4018207355f38d8f"}
Nov 23 15:04:49 crc kubenswrapper[4718]: I1123 15:04:49.911410 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f5538cf1-7653-415e-8b15-851291e281f1","Type":"ContainerStarted","Data":"712c3e58f266cd65e3386608e6abd078ba25149e55104b2865e910dd7ebd8766"}
Nov 23 15:04:49 crc kubenswrapper[4718]: I1123 15:04:49.911459 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 23 15:04:49 crc kubenswrapper[4718]: I1123 15:04:49.919423 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 23 15:04:49 crc kubenswrapper[4718]: I1123 15:04:49.935123 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.935097753 podStartE2EDuration="1.935097753s" podCreationTimestamp="2025-11-23 15:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:04:49.93022116 +0000 UTC m=+1141.169841024" watchObservedRunningTime="2025-11-23 15:04:49.935097753 +0000 UTC m=+1141.174717607"
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.125858 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-d6sch"]
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.127917 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch"
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.141418 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-d6sch"]
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.193421 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-d6sch\" (UID: \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch"
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.194605 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnjbx\" (UniqueName: \"kubernetes.io/projected/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-kube-api-access-bnjbx\") pod \"dnsmasq-dns-89c5cd4d5-d6sch\" (UID: \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch"
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.194855 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-d6sch\" (UID: \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch"
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.195033 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-d6sch\" (UID: \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch"
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.195107 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-config\") pod \"dnsmasq-dns-89c5cd4d5-d6sch\" (UID: \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch"
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.195245 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-d6sch\" (UID: \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch"
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.297157 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-d6sch\" (UID: \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch"
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.297232 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-d6sch\" (UID: \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch"
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.297253 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-config\") pod \"dnsmasq-dns-89c5cd4d5-d6sch\" (UID: \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch"
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.297284 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-d6sch\" (UID: \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch"
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.297359 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-d6sch\" (UID: \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch"
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.297376 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnjbx\" (UniqueName: \"kubernetes.io/projected/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-kube-api-access-bnjbx\") pod \"dnsmasq-dns-89c5cd4d5-d6sch\" (UID: \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch"
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.298447 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-d6sch\" (UID: \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch"
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.298991 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-d6sch\" (UID: \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch"
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.299912 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-config\") pod \"dnsmasq-dns-89c5cd4d5-d6sch\" (UID: \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch"
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.300212 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-d6sch\" (UID: \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch"
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.300212 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-d6sch\" (UID: \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch"
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.321681 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnjbx\" (UniqueName: \"kubernetes.io/projected/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-kube-api-access-bnjbx\") pod \"dnsmasq-dns-89c5cd4d5-d6sch\" (UID: \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\") " pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch"
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.468825 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch"
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.921817 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a374dce7-758d-4a17-916e-3dfbae172eaa","Type":"ContainerStarted","Data":"caed5e12429f8fcfec336917f1f3f6d267ef27e852496ab7575808d8a6d4e328"}
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.922277 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 23 15:04:50 crc kubenswrapper[4718]: I1123 15:04:50.952241 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.708041454 podStartE2EDuration="5.952195647s" podCreationTimestamp="2025-11-23 15:04:45 +0000 UTC" firstStartedPulling="2025-11-23 15:04:46.727790331 +0000 UTC m=+1137.967410175" lastFinishedPulling="2025-11-23 15:04:49.971944524 +0000 UTC m=+1141.211564368" observedRunningTime="2025-11-23 15:04:50.945186947 +0000 UTC m=+1142.184806791" watchObservedRunningTime="2025-11-23 15:04:50.952195647 +0000 UTC m=+1142.191815491"
Nov 23 15:04:51 crc kubenswrapper[4718]: I1123 15:04:51.006742 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-d6sch"]
Nov 23 15:04:51 crc kubenswrapper[4718]: W1123 15:04:51.012650 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2c1c2b4_4f68_4ed1_bcc8_72316c595aa0.slice/crio-76813e2645ffad0c212c1b603d38b21de14c98215d90c1cd02eb48327b02d7ae WatchSource:0}: Error finding container 76813e2645ffad0c212c1b603d38b21de14c98215d90c1cd02eb48327b02d7ae: Status 404 returned error can't find the container with id 76813e2645ffad0c212c1b603d38b21de14c98215d90c1cd02eb48327b02d7ae
Nov 23 15:04:51 crc kubenswrapper[4718]: I1123 15:04:51.932078 4718 generic.go:334] "Generic (PLEG): container finished" podID="f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0" containerID="3421c273d37354af91c53eb922676c9429576369efc54c6253c5d68158fdaede" exitCode=0
Nov 23 15:04:51 crc kubenswrapper[4718]: I1123 15:04:51.932173 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch" event={"ID":"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0","Type":"ContainerDied","Data":"3421c273d37354af91c53eb922676c9429576369efc54c6253c5d68158fdaede"}
Nov 23 15:04:51 crc kubenswrapper[4718]: I1123 15:04:51.933293 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch" event={"ID":"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0","Type":"ContainerStarted","Data":"76813e2645ffad0c212c1b603d38b21de14c98215d90c1cd02eb48327b02d7ae"}
Nov 23 15:04:52 crc kubenswrapper[4718]: I1123 15:04:52.141649 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 15:04:52 crc kubenswrapper[4718]: I1123 15:04:52.432140 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Nov 23 15:04:52 crc kubenswrapper[4718]: I1123 15:04:52.912225 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 23 15:04:52 crc kubenswrapper[4718]: I1123 15:04:52.982571 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch" event={"ID":"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0","Type":"ContainerStarted","Data":"a8ad028a18e0e029b6893489d3bc268613d206107ad47660b2c1aa363754e251"}
Nov 23 15:04:52 crc kubenswrapper[4718]: I1123 15:04:52.982788 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a374dce7-758d-4a17-916e-3dfbae172eaa" containerName="ceilometer-central-agent" containerID="cri-o://ff8c57efda1c42b6e974f613633b345807b55329495cbccc21fe37ed628fda03" gracePeriod=30
Nov 23 15:04:52 crc kubenswrapper[4718]: I1123 15:04:52.982926 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch"
Nov 23 15:04:52 crc kubenswrapper[4718]: I1123 15:04:52.983296 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a374dce7-758d-4a17-916e-3dfbae172eaa" containerName="sg-core" containerID="cri-o://90fc296da56997d82fc7cff9f65d3b1f21557669815c570f944e7d10b8656f55" gracePeriod=30
Nov 23 15:04:52 crc kubenswrapper[4718]: I1123 15:04:52.983341 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a374dce7-758d-4a17-916e-3dfbae172eaa" containerName="proxy-httpd" containerID="cri-o://caed5e12429f8fcfec336917f1f3f6d267ef27e852496ab7575808d8a6d4e328" gracePeriod=30
Nov 23 15:04:52 crc kubenswrapper[4718]: I1123 15:04:52.983373 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a374dce7-758d-4a17-916e-3dfbae172eaa" containerName="ceilometer-notification-agent" containerID="cri-o://965d0d774d091707912708ff9eb86be322405ca224b569ebb316f28c63051432" gracePeriod=30
Nov 23 15:04:52 crc kubenswrapper[4718]: I1123 15:04:52.984665 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="370e8e20-e2e6-4428-8fb9-0e03617d8d97" containerName="nova-api-log" containerID="cri-o://d9f36f11e0f29e7fd4fb09e0f253d62324c3f4dbde5af8106a36add18ca8f85f" gracePeriod=30
Nov 23 15:04:52 crc kubenswrapper[4718]: I1123 15:04:52.984705 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="370e8e20-e2e6-4428-8fb9-0e03617d8d97" containerName="nova-api-api" containerID="cri-o://f0fb36810c087132620971ccb44f0d6a31d4b6345889a2319f8e70320debcf28" gracePeriod=30
Nov 23 15:04:53 crc kubenswrapper[4718]: I1123 15:04:53.013513 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch" podStartSLOduration=3.013493843 podStartE2EDuration="3.013493843s" podCreationTimestamp="2025-11-23 15:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:04:53.010200684 +0000 UTC m=+1144.249820528" watchObservedRunningTime="2025-11-23 15:04:53.013493843 +0000 UTC m=+1144.253113697"
Nov 23 15:04:53 crc kubenswrapper[4718]: I1123 15:04:53.053406 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 15:04:53 crc kubenswrapper[4718]: I1123 15:04:53.053500 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 15:04:53 crc kubenswrapper[4718]: I1123 15:04:53.053555 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw"
Nov 23 15:04:53 crc kubenswrapper[4718]: I1123 15:04:53.054355 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71577aa824008968487c33bc21787ce3eb07e3b6f70d6cf5aac37a6881128f0a"} pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 23 15:04:53 crc kubenswrapper[4718]: I1123 15:04:53.054418 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" containerID="cri-o://71577aa824008968487c33bc21787ce3eb07e3b6f70d6cf5aac37a6881128f0a" gracePeriod=600
Nov 23 15:04:53 crc kubenswrapper[4718]: I1123 15:04:53.601731 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Nov 23 15:04:53 crc kubenswrapper[4718]: I1123 15:04:53.998867 4718 generic.go:334] "Generic (PLEG): container finished" podID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerID="71577aa824008968487c33bc21787ce3eb07e3b6f70d6cf5aac37a6881128f0a" exitCode=0
Nov 23 15:04:53 crc kubenswrapper[4718]: I1123 15:04:53.999234 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerDied","Data":"71577aa824008968487c33bc21787ce3eb07e3b6f70d6cf5aac37a6881128f0a"}
Nov 23 15:04:53 crc kubenswrapper[4718]: I1123 15:04:53.999288 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerStarted","Data":"1501bec9fe9ec98aacf1e278e6a359530da30674903e2cad276cc832e866bc17"}
Nov 23 15:04:53 crc kubenswrapper[4718]: I1123 15:04:53.999309 4718 scope.go:117] "RemoveContainer" containerID="99f9fac909b59d4f75ad1a92599badf9d1f5ce639b6a71381d289bfc1cc670ef"
Nov 23 15:04:54 crc kubenswrapper[4718]: I1123 15:04:54.005539 4718 generic.go:334] "Generic (PLEG): container finished" podID="370e8e20-e2e6-4428-8fb9-0e03617d8d97" containerID="d9f36f11e0f29e7fd4fb09e0f253d62324c3f4dbde5af8106a36add18ca8f85f" exitCode=143
Nov 23 15:04:54 crc kubenswrapper[4718]: I1123 15:04:54.005666 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"370e8e20-e2e6-4428-8fb9-0e03617d8d97","Type":"ContainerDied","Data":"d9f36f11e0f29e7fd4fb09e0f253d62324c3f4dbde5af8106a36add18ca8f85f"}
Nov 23 15:04:54 crc kubenswrapper[4718]: I1123 15:04:54.009871 4718 generic.go:334] "Generic (PLEG): container finished" podID="a374dce7-758d-4a17-916e-3dfbae172eaa" containerID="caed5e12429f8fcfec336917f1f3f6d267ef27e852496ab7575808d8a6d4e328" exitCode=0
Nov 23 15:04:54 crc kubenswrapper[4718]: I1123 15:04:54.009914 4718 generic.go:334] "Generic (PLEG): container finished" podID="a374dce7-758d-4a17-916e-3dfbae172eaa" containerID="90fc296da56997d82fc7cff9f65d3b1f21557669815c570f944e7d10b8656f55" exitCode=2
Nov 23 15:04:54 crc kubenswrapper[4718]: I1123 15:04:54.009924 4718 generic.go:334] "Generic (PLEG): container finished" podID="a374dce7-758d-4a17-916e-3dfbae172eaa" containerID="965d0d774d091707912708ff9eb86be322405ca224b569ebb316f28c63051432" exitCode=0
Nov 23 15:04:54 crc kubenswrapper[4718]: I1123 15:04:54.010874 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a374dce7-758d-4a17-916e-3dfbae172eaa","Type":"ContainerDied","Data":"caed5e12429f8fcfec336917f1f3f6d267ef27e852496ab7575808d8a6d4e328"}
Nov 23 15:04:54 crc kubenswrapper[4718]: I1123 15:04:54.010906 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a374dce7-758d-4a17-916e-3dfbae172eaa","Type":"ContainerDied","Data":"90fc296da56997d82fc7cff9f65d3b1f21557669815c570f944e7d10b8656f55"}
Nov 23 15:04:54 crc kubenswrapper[4718]: I1123 15:04:54.010933 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a374dce7-758d-4a17-916e-3dfbae172eaa","Type":"ContainerDied","Data":"965d0d774d091707912708ff9eb86be322405ca224b569ebb316f28c63051432"}
Nov 23 15:04:54 crc kubenswrapper[4718]: I1123 15:04:54.826634 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 15:04:54 crc kubenswrapper[4718]: I1123 15:04:54.985533 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-config-data\") pod \"a374dce7-758d-4a17-916e-3dfbae172eaa\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") "
Nov 23 15:04:54 crc kubenswrapper[4718]: I1123 15:04:54.985890 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-combined-ca-bundle\") pod \"a374dce7-758d-4a17-916e-3dfbae172eaa\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") "
Nov 23 15:04:54 crc kubenswrapper[4718]: I1123 15:04:54.986034 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-scripts\") pod \"a374dce7-758d-4a17-916e-3dfbae172eaa\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") "
Nov 23 15:04:54 crc kubenswrapper[4718]: I1123 15:04:54.986079 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a374dce7-758d-4a17-916e-3dfbae172eaa-run-httpd\") pod \"a374dce7-758d-4a17-916e-3dfbae172eaa\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") "
Nov 23 15:04:54 crc kubenswrapper[4718]: I1123 15:04:54.986212 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a374dce7-758d-4a17-916e-3dfbae172eaa-log-httpd\") pod \"a374dce7-758d-4a17-916e-3dfbae172eaa\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") "
Nov 23 15:04:54 crc kubenswrapper[4718]: I1123 15:04:54.986292 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-sg-core-conf-yaml\") pod \"a374dce7-758d-4a17-916e-3dfbae172eaa\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") "
Nov 23 15:04:54 crc kubenswrapper[4718]: I1123 15:04:54.986341 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swvtt\" (UniqueName: \"kubernetes.io/projected/a374dce7-758d-4a17-916e-3dfbae172eaa-kube-api-access-swvtt\") pod \"a374dce7-758d-4a17-916e-3dfbae172eaa\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") "
Nov 23 15:04:54 crc kubenswrapper[4718]: I1123 15:04:54.986377 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-ceilometer-tls-certs\") pod \"a374dce7-758d-4a17-916e-3dfbae172eaa\" (UID: \"a374dce7-758d-4a17-916e-3dfbae172eaa\") "
Nov 23 15:04:54 crc kubenswrapper[4718]: I1123 15:04:54.986779 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a374dce7-758d-4a17-916e-3dfbae172eaa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a374dce7-758d-4a17-916e-3dfbae172eaa" (UID: "a374dce7-758d-4a17-916e-3dfbae172eaa"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 15:04:54 crc kubenswrapper[4718]: I1123 15:04:54.986921 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a374dce7-758d-4a17-916e-3dfbae172eaa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a374dce7-758d-4a17-916e-3dfbae172eaa" (UID: "a374dce7-758d-4a17-916e-3dfbae172eaa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 15:04:54 crc kubenswrapper[4718]: I1123 15:04:54.987353 4718 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a374dce7-758d-4a17-916e-3dfbae172eaa-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:54 crc kubenswrapper[4718]: I1123 15:04:54.987371 4718 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a374dce7-758d-4a17-916e-3dfbae172eaa-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:54 crc kubenswrapper[4718]: I1123 15:04:54.992689 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a374dce7-758d-4a17-916e-3dfbae172eaa-kube-api-access-swvtt" (OuterVolumeSpecName: "kube-api-access-swvtt") pod "a374dce7-758d-4a17-916e-3dfbae172eaa" (UID: "a374dce7-758d-4a17-916e-3dfbae172eaa"). InnerVolumeSpecName "kube-api-access-swvtt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.001552 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-scripts" (OuterVolumeSpecName: "scripts") pod "a374dce7-758d-4a17-916e-3dfbae172eaa" (UID: "a374dce7-758d-4a17-916e-3dfbae172eaa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.025999 4718 generic.go:334] "Generic (PLEG): container finished" podID="a374dce7-758d-4a17-916e-3dfbae172eaa" containerID="ff8c57efda1c42b6e974f613633b345807b55329495cbccc21fe37ed628fda03" exitCode=0
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.026056 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a374dce7-758d-4a17-916e-3dfbae172eaa","Type":"ContainerDied","Data":"ff8c57efda1c42b6e974f613633b345807b55329495cbccc21fe37ed628fda03"}
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.026092 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a374dce7-758d-4a17-916e-3dfbae172eaa","Type":"ContainerDied","Data":"9e1dd2ff3ab1d53be78fcddcda524071c48d31400ab82cc2d8f70927cb6aa0ba"}
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.026119 4718 scope.go:117] "RemoveContainer" containerID="caed5e12429f8fcfec336917f1f3f6d267ef27e852496ab7575808d8a6d4e328"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.026282 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.036670 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a374dce7-758d-4a17-916e-3dfbae172eaa" (UID: "a374dce7-758d-4a17-916e-3dfbae172eaa"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.051933 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a374dce7-758d-4a17-916e-3dfbae172eaa" (UID: "a374dce7-758d-4a17-916e-3dfbae172eaa"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.081869 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a374dce7-758d-4a17-916e-3dfbae172eaa" (UID: "a374dce7-758d-4a17-916e-3dfbae172eaa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.089646 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.089685 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.089727 4718 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.089743 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swvtt\" (UniqueName: \"kubernetes.io/projected/a374dce7-758d-4a17-916e-3dfbae172eaa-kube-api-access-swvtt\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.089757 4718 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.097970 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-config-data" (OuterVolumeSpecName: "config-data") pod "a374dce7-758d-4a17-916e-3dfbae172eaa" (UID: "a374dce7-758d-4a17-916e-3dfbae172eaa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.134187 4718 scope.go:117] "RemoveContainer" containerID="90fc296da56997d82fc7cff9f65d3b1f21557669815c570f944e7d10b8656f55"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.175275 4718 scope.go:117] "RemoveContainer" containerID="965d0d774d091707912708ff9eb86be322405ca224b569ebb316f28c63051432"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.192138 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a374dce7-758d-4a17-916e-3dfbae172eaa-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.212481 4718 scope.go:117] "RemoveContainer" containerID="ff8c57efda1c42b6e974f613633b345807b55329495cbccc21fe37ed628fda03"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.252401 4718 scope.go:117] "RemoveContainer" containerID="caed5e12429f8fcfec336917f1f3f6d267ef27e852496ab7575808d8a6d4e328"
Nov 23 15:04:55 crc kubenswrapper[4718]: E1123 15:04:55.252952 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caed5e12429f8fcfec336917f1f3f6d267ef27e852496ab7575808d8a6d4e328\": container with ID starting with caed5e12429f8fcfec336917f1f3f6d267ef27e852496ab7575808d8a6d4e328 not found: ID does not exist" containerID="caed5e12429f8fcfec336917f1f3f6d267ef27e852496ab7575808d8a6d4e328"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.253007 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caed5e12429f8fcfec336917f1f3f6d267ef27e852496ab7575808d8a6d4e328"} err="failed to get container status \"caed5e12429f8fcfec336917f1f3f6d267ef27e852496ab7575808d8a6d4e328\": rpc error: code = NotFound desc = could not find container \"caed5e12429f8fcfec336917f1f3f6d267ef27e852496ab7575808d8a6d4e328\": container with ID starting with caed5e12429f8fcfec336917f1f3f6d267ef27e852496ab7575808d8a6d4e328 not found: ID does not exist"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.253040 4718 scope.go:117] "RemoveContainer" containerID="90fc296da56997d82fc7cff9f65d3b1f21557669815c570f944e7d10b8656f55"
Nov 23 15:04:55 crc kubenswrapper[4718]: E1123 15:04:55.253575 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90fc296da56997d82fc7cff9f65d3b1f21557669815c570f944e7d10b8656f55\": container with ID starting with 90fc296da56997d82fc7cff9f65d3b1f21557669815c570f944e7d10b8656f55 not found: ID does not exist" containerID="90fc296da56997d82fc7cff9f65d3b1f21557669815c570f944e7d10b8656f55"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.253607 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90fc296da56997d82fc7cff9f65d3b1f21557669815c570f944e7d10b8656f55"} err="failed to get container status \"90fc296da56997d82fc7cff9f65d3b1f21557669815c570f944e7d10b8656f55\": rpc error: code = NotFound desc = could not find container \"90fc296da56997d82fc7cff9f65d3b1f21557669815c570f944e7d10b8656f55\": container with ID starting with 90fc296da56997d82fc7cff9f65d3b1f21557669815c570f944e7d10b8656f55 not found: ID does not exist"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.253642 4718 scope.go:117] "RemoveContainer" containerID="965d0d774d091707912708ff9eb86be322405ca224b569ebb316f28c63051432"
Nov 23 15:04:55 crc kubenswrapper[4718]: E1123 15:04:55.254015 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"965d0d774d091707912708ff9eb86be322405ca224b569ebb316f28c63051432\": container with ID starting with 965d0d774d091707912708ff9eb86be322405ca224b569ebb316f28c63051432 not found: ID does not exist" containerID="965d0d774d091707912708ff9eb86be322405ca224b569ebb316f28c63051432"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.254042 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"965d0d774d091707912708ff9eb86be322405ca224b569ebb316f28c63051432"} err="failed to get container status \"965d0d774d091707912708ff9eb86be322405ca224b569ebb316f28c63051432\": rpc error: code = NotFound desc = could not find container \"965d0d774d091707912708ff9eb86be322405ca224b569ebb316f28c63051432\": container with ID starting with 965d0d774d091707912708ff9eb86be322405ca224b569ebb316f28c63051432 not found: ID does not exist"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.254060 4718 scope.go:117] "RemoveContainer" containerID="ff8c57efda1c42b6e974f613633b345807b55329495cbccc21fe37ed628fda03"
Nov 23 15:04:55 crc kubenswrapper[4718]: E1123 15:04:55.254299 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff8c57efda1c42b6e974f613633b345807b55329495cbccc21fe37ed628fda03\": container with ID starting with ff8c57efda1c42b6e974f613633b345807b55329495cbccc21fe37ed628fda03 not found: ID does not exist" containerID="ff8c57efda1c42b6e974f613633b345807b55329495cbccc21fe37ed628fda03"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.254327 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff8c57efda1c42b6e974f613633b345807b55329495cbccc21fe37ed628fda03"} err="failed to get container status \"ff8c57efda1c42b6e974f613633b345807b55329495cbccc21fe37ed628fda03\": rpc error: code = NotFound desc = could not find container \"ff8c57efda1c42b6e974f613633b345807b55329495cbccc21fe37ed628fda03\": container with ID starting with ff8c57efda1c42b6e974f613633b345807b55329495cbccc21fe37ed628fda03 not found: ID does not exist"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.368660 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.379789 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.390137 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 23 15:04:55 crc kubenswrapper[4718]: E1123 15:04:55.390628 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a374dce7-758d-4a17-916e-3dfbae172eaa" containerName="sg-core"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.390656 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a374dce7-758d-4a17-916e-3dfbae172eaa" containerName="sg-core"
Nov 23 15:04:55 crc kubenswrapper[4718]: E1123 15:04:55.390688 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a374dce7-758d-4a17-916e-3dfbae172eaa" containerName="proxy-httpd"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.390700 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a374dce7-758d-4a17-916e-3dfbae172eaa" containerName="proxy-httpd"
Nov 23 15:04:55 crc kubenswrapper[4718]: E1123 15:04:55.390736 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a374dce7-758d-4a17-916e-3dfbae172eaa" containerName="ceilometer-notification-agent"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.390748 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a374dce7-758d-4a17-916e-3dfbae172eaa" containerName="ceilometer-notification-agent"
Nov 23 15:04:55 crc kubenswrapper[4718]: E1123 15:04:55.390785 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a374dce7-758d-4a17-916e-3dfbae172eaa" containerName="ceilometer-central-agent"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.390795 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a374dce7-758d-4a17-916e-3dfbae172eaa" containerName="ceilometer-central-agent"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.391066 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a374dce7-758d-4a17-916e-3dfbae172eaa" containerName="sg-core"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.391104 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a374dce7-758d-4a17-916e-3dfbae172eaa" containerName="ceilometer-notification-agent"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.391128 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a374dce7-758d-4a17-916e-3dfbae172eaa" containerName="ceilometer-central-agent"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.391147 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a374dce7-758d-4a17-916e-3dfbae172eaa" containerName="proxy-httpd"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.393185 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.397177 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.397337 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.397550 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.406974 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 15:04:55 crc kubenswrapper[4718]: E1123 15:04:55.446637 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda374dce7_758d_4a17_916e_3dfbae172eaa.slice/crio-9e1dd2ff3ab1d53be78fcddcda524071c48d31400ab82cc2d8f70927cb6aa0ba\": RecentStats: unable to find data in memory cache]"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.496490 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbql7\" (UniqueName: \"kubernetes.io/projected/786c1d0e-1895-4b3f-a95e-537692e9685d-kube-api-access-lbql7\") pod \"ceilometer-0\" (UID: \"786c1d0e-1895-4b3f-a95e-537692e9685d\") " pod="openstack/ceilometer-0"
Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.496726 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/786c1d0e-1895-4b3f-a95e-537692e9685d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"786c1d0e-1895-4b3f-a95e-537692e9685d\") "
pod="openstack/ceilometer-0" Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.496788 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/786c1d0e-1895-4b3f-a95e-537692e9685d-log-httpd\") pod \"ceilometer-0\" (UID: \"786c1d0e-1895-4b3f-a95e-537692e9685d\") " pod="openstack/ceilometer-0" Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.496809 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/786c1d0e-1895-4b3f-a95e-537692e9685d-run-httpd\") pod \"ceilometer-0\" (UID: \"786c1d0e-1895-4b3f-a95e-537692e9685d\") " pod="openstack/ceilometer-0" Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.496828 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/786c1d0e-1895-4b3f-a95e-537692e9685d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"786c1d0e-1895-4b3f-a95e-537692e9685d\") " pod="openstack/ceilometer-0" Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.496865 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/786c1d0e-1895-4b3f-a95e-537692e9685d-config-data\") pod \"ceilometer-0\" (UID: \"786c1d0e-1895-4b3f-a95e-537692e9685d\") " pod="openstack/ceilometer-0" Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.496946 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/786c1d0e-1895-4b3f-a95e-537692e9685d-scripts\") pod \"ceilometer-0\" (UID: \"786c1d0e-1895-4b3f-a95e-537692e9685d\") " pod="openstack/ceilometer-0" Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.496975 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/786c1d0e-1895-4b3f-a95e-537692e9685d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"786c1d0e-1895-4b3f-a95e-537692e9685d\") " pod="openstack/ceilometer-0" Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.598326 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/786c1d0e-1895-4b3f-a95e-537692e9685d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"786c1d0e-1895-4b3f-a95e-537692e9685d\") " pod="openstack/ceilometer-0" Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.598394 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/786c1d0e-1895-4b3f-a95e-537692e9685d-log-httpd\") pod \"ceilometer-0\" (UID: \"786c1d0e-1895-4b3f-a95e-537692e9685d\") " pod="openstack/ceilometer-0" Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.598416 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/786c1d0e-1895-4b3f-a95e-537692e9685d-run-httpd\") pod \"ceilometer-0\" (UID: \"786c1d0e-1895-4b3f-a95e-537692e9685d\") " pod="openstack/ceilometer-0" Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.598459 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/786c1d0e-1895-4b3f-a95e-537692e9685d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"786c1d0e-1895-4b3f-a95e-537692e9685d\") " pod="openstack/ceilometer-0" Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.598509 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/786c1d0e-1895-4b3f-a95e-537692e9685d-config-data\") pod \"ceilometer-0\" (UID: \"786c1d0e-1895-4b3f-a95e-537692e9685d\") " pod="openstack/ceilometer-0" Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.598577 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/786c1d0e-1895-4b3f-a95e-537692e9685d-scripts\") pod \"ceilometer-0\" (UID: \"786c1d0e-1895-4b3f-a95e-537692e9685d\") " pod="openstack/ceilometer-0" Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.598609 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/786c1d0e-1895-4b3f-a95e-537692e9685d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"786c1d0e-1895-4b3f-a95e-537692e9685d\") " pod="openstack/ceilometer-0" Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.598644 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbql7\" (UniqueName: \"kubernetes.io/projected/786c1d0e-1895-4b3f-a95e-537692e9685d-kube-api-access-lbql7\") pod \"ceilometer-0\" (UID: \"786c1d0e-1895-4b3f-a95e-537692e9685d\") " pod="openstack/ceilometer-0" Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.599567 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/786c1d0e-1895-4b3f-a95e-537692e9685d-log-httpd\") pod \"ceilometer-0\" (UID: \"786c1d0e-1895-4b3f-a95e-537692e9685d\") " pod="openstack/ceilometer-0" Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.599602 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/786c1d0e-1895-4b3f-a95e-537692e9685d-run-httpd\") pod \"ceilometer-0\" (UID: \"786c1d0e-1895-4b3f-a95e-537692e9685d\") " pod="openstack/ceilometer-0" Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.603179 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/786c1d0e-1895-4b3f-a95e-537692e9685d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"786c1d0e-1895-4b3f-a95e-537692e9685d\") " pod="openstack/ceilometer-0" Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.603723 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/786c1d0e-1895-4b3f-a95e-537692e9685d-scripts\") pod \"ceilometer-0\" (UID: \"786c1d0e-1895-4b3f-a95e-537692e9685d\") " pod="openstack/ceilometer-0" Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.603999 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/786c1d0e-1895-4b3f-a95e-537692e9685d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"786c1d0e-1895-4b3f-a95e-537692e9685d\") " pod="openstack/ceilometer-0" Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.604088 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/786c1d0e-1895-4b3f-a95e-537692e9685d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"786c1d0e-1895-4b3f-a95e-537692e9685d\") " pod="openstack/ceilometer-0" Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.606162 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/786c1d0e-1895-4b3f-a95e-537692e9685d-config-data\") pod \"ceilometer-0\" (UID: \"786c1d0e-1895-4b3f-a95e-537692e9685d\") " pod="openstack/ceilometer-0" Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.618673 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbql7\" (UniqueName: \"kubernetes.io/projected/786c1d0e-1895-4b3f-a95e-537692e9685d-kube-api-access-lbql7\") pod \"ceilometer-0\" (UID: \"786c1d0e-1895-4b3f-a95e-537692e9685d\") " pod="openstack/ceilometer-0" Nov 23 15:04:55 crc kubenswrapper[4718]: I1123 15:04:55.765268 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 15:04:56 crc kubenswrapper[4718]: W1123 15:04:56.254000 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod786c1d0e_1895_4b3f_a95e_537692e9685d.slice/crio-73d7920cde934b5361973ffa2e3d70ceb74f5bb9dd4d521ee313346daff41e44 WatchSource:0}: Error finding container 73d7920cde934b5361973ffa2e3d70ceb74f5bb9dd4d521ee313346daff41e44: Status 404 returned error can't find the container with id 73d7920cde934b5361973ffa2e3d70ceb74f5bb9dd4d521ee313346daff41e44 Nov 23 15:04:56 crc kubenswrapper[4718]: I1123 15:04:56.254865 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 15:04:56 crc kubenswrapper[4718]: I1123 15:04:56.455574 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a374dce7-758d-4a17-916e-3dfbae172eaa" path="/var/lib/kubelet/pods/a374dce7-758d-4a17-916e-3dfbae172eaa/volumes" Nov 23 15:04:56 crc kubenswrapper[4718]: I1123 15:04:56.522335 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 15:04:56 crc kubenswrapper[4718]: I1123 15:04:56.729996 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5mf4\" (UniqueName: \"kubernetes.io/projected/370e8e20-e2e6-4428-8fb9-0e03617d8d97-kube-api-access-t5mf4\") pod \"370e8e20-e2e6-4428-8fb9-0e03617d8d97\" (UID: \"370e8e20-e2e6-4428-8fb9-0e03617d8d97\") " Nov 23 15:04:56 crc kubenswrapper[4718]: I1123 15:04:56.730267 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/370e8e20-e2e6-4428-8fb9-0e03617d8d97-config-data\") pod \"370e8e20-e2e6-4428-8fb9-0e03617d8d97\" (UID: \"370e8e20-e2e6-4428-8fb9-0e03617d8d97\") " Nov 23 15:04:56 crc kubenswrapper[4718]: I1123 15:04:56.730321 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/370e8e20-e2e6-4428-8fb9-0e03617d8d97-logs\") pod \"370e8e20-e2e6-4428-8fb9-0e03617d8d97\" (UID: \"370e8e20-e2e6-4428-8fb9-0e03617d8d97\") " Nov 23 15:04:56 crc kubenswrapper[4718]: I1123 15:04:56.730350 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/370e8e20-e2e6-4428-8fb9-0e03617d8d97-combined-ca-bundle\") pod \"370e8e20-e2e6-4428-8fb9-0e03617d8d97\" (UID: \"370e8e20-e2e6-4428-8fb9-0e03617d8d97\") " Nov 23 15:04:56 crc kubenswrapper[4718]: I1123 15:04:56.732277 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/370e8e20-e2e6-4428-8fb9-0e03617d8d97-logs" (OuterVolumeSpecName: "logs") pod "370e8e20-e2e6-4428-8fb9-0e03617d8d97" (UID: "370e8e20-e2e6-4428-8fb9-0e03617d8d97"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:04:56 crc kubenswrapper[4718]: I1123 15:04:56.756548 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/370e8e20-e2e6-4428-8fb9-0e03617d8d97-kube-api-access-t5mf4" (OuterVolumeSpecName: "kube-api-access-t5mf4") pod "370e8e20-e2e6-4428-8fb9-0e03617d8d97" (UID: "370e8e20-e2e6-4428-8fb9-0e03617d8d97"). InnerVolumeSpecName "kube-api-access-t5mf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:04:56 crc kubenswrapper[4718]: I1123 15:04:56.767247 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/370e8e20-e2e6-4428-8fb9-0e03617d8d97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "370e8e20-e2e6-4428-8fb9-0e03617d8d97" (UID: "370e8e20-e2e6-4428-8fb9-0e03617d8d97"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:04:56 crc kubenswrapper[4718]: I1123 15:04:56.831827 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5mf4\" (UniqueName: \"kubernetes.io/projected/370e8e20-e2e6-4428-8fb9-0e03617d8d97-kube-api-access-t5mf4\") on node \"crc\" DevicePath \"\"" Nov 23 15:04:56 crc kubenswrapper[4718]: I1123 15:04:56.831853 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/370e8e20-e2e6-4428-8fb9-0e03617d8d97-logs\") on node \"crc\" DevicePath \"\"" Nov 23 15:04:56 crc kubenswrapper[4718]: I1123 15:04:56.831863 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/370e8e20-e2e6-4428-8fb9-0e03617d8d97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:04:56 crc kubenswrapper[4718]: I1123 15:04:56.832883 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/370e8e20-e2e6-4428-8fb9-0e03617d8d97-config-data" (OuterVolumeSpecName: "config-data") pod "370e8e20-e2e6-4428-8fb9-0e03617d8d97" (UID: "370e8e20-e2e6-4428-8fb9-0e03617d8d97"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:04:56 crc kubenswrapper[4718]: I1123 15:04:56.933694 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/370e8e20-e2e6-4428-8fb9-0e03617d8d97-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.044832 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"786c1d0e-1895-4b3f-a95e-537692e9685d","Type":"ContainerStarted","Data":"7196f41892ff2dfdc6123cbc2a0e02050114a501f000c1267b9caf81f65fde95"} Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.044871 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"786c1d0e-1895-4b3f-a95e-537692e9685d","Type":"ContainerStarted","Data":"73d7920cde934b5361973ffa2e3d70ceb74f5bb9dd4d521ee313346daff41e44"} Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.046952 4718 generic.go:334] "Generic (PLEG): container finished" podID="370e8e20-e2e6-4428-8fb9-0e03617d8d97" containerID="f0fb36810c087132620971ccb44f0d6a31d4b6345889a2319f8e70320debcf28" exitCode=0 Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.046979 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"370e8e20-e2e6-4428-8fb9-0e03617d8d97","Type":"ContainerDied","Data":"f0fb36810c087132620971ccb44f0d6a31d4b6345889a2319f8e70320debcf28"} Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.047004 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"370e8e20-e2e6-4428-8fb9-0e03617d8d97","Type":"ContainerDied","Data":"110b1034d590373d1eccb5f9c09f2ff98399d34a5396e8c6adc0f5ef7bed483f"} Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.047021 4718 scope.go:117] "RemoveContainer" containerID="f0fb36810c087132620971ccb44f0d6a31d4b6345889a2319f8e70320debcf28" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.047033 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.068719 4718 scope.go:117] "RemoveContainer" containerID="d9f36f11e0f29e7fd4fb09e0f253d62324c3f4dbde5af8106a36add18ca8f85f" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.088789 4718 scope.go:117] "RemoveContainer" containerID="f0fb36810c087132620971ccb44f0d6a31d4b6345889a2319f8e70320debcf28" Nov 23 15:04:57 crc kubenswrapper[4718]: E1123 15:04:57.089205 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0fb36810c087132620971ccb44f0d6a31d4b6345889a2319f8e70320debcf28\": container with ID starting with f0fb36810c087132620971ccb44f0d6a31d4b6345889a2319f8e70320debcf28 not found: ID does not exist" containerID="f0fb36810c087132620971ccb44f0d6a31d4b6345889a2319f8e70320debcf28" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.089236 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0fb36810c087132620971ccb44f0d6a31d4b6345889a2319f8e70320debcf28"} err="failed to get container status \"f0fb36810c087132620971ccb44f0d6a31d4b6345889a2319f8e70320debcf28\": rpc error: code = NotFound desc = could not find container \"f0fb36810c087132620971ccb44f0d6a31d4b6345889a2319f8e70320debcf28\": container with ID starting with f0fb36810c087132620971ccb44f0d6a31d4b6345889a2319f8e70320debcf28 not found: ID does not exist" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.089258 4718 scope.go:117] "RemoveContainer" containerID="d9f36f11e0f29e7fd4fb09e0f253d62324c3f4dbde5af8106a36add18ca8f85f" Nov 23 15:04:57 crc kubenswrapper[4718]: E1123 15:04:57.089487 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9f36f11e0f29e7fd4fb09e0f253d62324c3f4dbde5af8106a36add18ca8f85f\": container with ID starting with d9f36f11e0f29e7fd4fb09e0f253d62324c3f4dbde5af8106a36add18ca8f85f not found: ID does not exist" containerID="d9f36f11e0f29e7fd4fb09e0f253d62324c3f4dbde5af8106a36add18ca8f85f" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.089508 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9f36f11e0f29e7fd4fb09e0f253d62324c3f4dbde5af8106a36add18ca8f85f"} err="failed to get container status \"d9f36f11e0f29e7fd4fb09e0f253d62324c3f4dbde5af8106a36add18ca8f85f\": rpc error: code = NotFound desc = could not find container \"d9f36f11e0f29e7fd4fb09e0f253d62324c3f4dbde5af8106a36add18ca8f85f\": container with ID starting with d9f36f11e0f29e7fd4fb09e0f253d62324c3f4dbde5af8106a36add18ca8f85f not found: ID does not exist" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.091763 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.099732 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.120537 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 15:04:57 crc kubenswrapper[4718]: E1123 15:04:57.120887 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="370e8e20-e2e6-4428-8fb9-0e03617d8d97" containerName="nova-api-api" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.120904 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="370e8e20-e2e6-4428-8fb9-0e03617d8d97" containerName="nova-api-api" Nov 23 15:04:57 crc 
kubenswrapper[4718]: E1123 15:04:57.120926 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="370e8e20-e2e6-4428-8fb9-0e03617d8d97" containerName="nova-api-log" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.120933 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="370e8e20-e2e6-4428-8fb9-0e03617d8d97" containerName="nova-api-log" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.121105 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="370e8e20-e2e6-4428-8fb9-0e03617d8d97" containerName="nova-api-api" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.121134 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="370e8e20-e2e6-4428-8fb9-0e03617d8d97" containerName="nova-api-log" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.122265 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.124140 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.130259 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.130267 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.132660 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.140645 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87c23bb7-bf1c-4139-be89-8238bb660723-public-tls-certs\") pod \"nova-api-0\" (UID: \"87c23bb7-bf1c-4139-be89-8238bb660723\") " pod="openstack/nova-api-0" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.140699 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c23bb7-bf1c-4139-be89-8238bb660723-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"87c23bb7-bf1c-4139-be89-8238bb660723\") " pod="openstack/nova-api-0" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.140799 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87c23bb7-bf1c-4139-be89-8238bb660723-internal-tls-certs\") pod \"nova-api-0\" (UID: \"87c23bb7-bf1c-4139-be89-8238bb660723\") " pod="openstack/nova-api-0" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.140827 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87c23bb7-bf1c-4139-be89-8238bb660723-logs\") pod \"nova-api-0\" (UID: \"87c23bb7-bf1c-4139-be89-8238bb660723\") " pod="openstack/nova-api-0" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.140877 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c23bb7-bf1c-4139-be89-8238bb660723-config-data\") pod \"nova-api-0\" (UID: \"87c23bb7-bf1c-4139-be89-8238bb660723\") " pod="openstack/nova-api-0" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.140915 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcsbv\" (UniqueName: \"kubernetes.io/projected/87c23bb7-bf1c-4139-be89-8238bb660723-kube-api-access-pcsbv\") pod \"nova-api-0\" (UID: \"87c23bb7-bf1c-4139-be89-8238bb660723\") " pod="openstack/nova-api-0" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.244037 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87c23bb7-bf1c-4139-be89-8238bb660723-internal-tls-certs\") pod \"nova-api-0\" (UID: \"87c23bb7-bf1c-4139-be89-8238bb660723\") " pod="openstack/nova-api-0" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.244102 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87c23bb7-bf1c-4139-be89-8238bb660723-logs\") pod \"nova-api-0\" (UID: \"87c23bb7-bf1c-4139-be89-8238bb660723\") " pod="openstack/nova-api-0" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.244164 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c23bb7-bf1c-4139-be89-8238bb660723-config-data\") pod \"nova-api-0\" (UID: \"87c23bb7-bf1c-4139-be89-8238bb660723\") " pod="openstack/nova-api-0" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.244212 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcsbv\" (UniqueName: \"kubernetes.io/projected/87c23bb7-bf1c-4139-be89-8238bb660723-kube-api-access-pcsbv\") pod \"nova-api-0\" (UID: \"87c23bb7-bf1c-4139-be89-8238bb660723\") " pod="openstack/nova-api-0" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.244317 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87c23bb7-bf1c-4139-be89-8238bb660723-public-tls-certs\") pod \"nova-api-0\" (UID: \"87c23bb7-bf1c-4139-be89-8238bb660723\") " pod="openstack/nova-api-0" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.244359 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c23bb7-bf1c-4139-be89-8238bb660723-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"87c23bb7-bf1c-4139-be89-8238bb660723\") " pod="openstack/nova-api-0" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.244707 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87c23bb7-bf1c-4139-be89-8238bb660723-logs\") pod \"nova-api-0\" (UID: \"87c23bb7-bf1c-4139-be89-8238bb660723\") " pod="openstack/nova-api-0" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.250177 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c23bb7-bf1c-4139-be89-8238bb660723-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"87c23bb7-bf1c-4139-be89-8238bb660723\") " pod="openstack/nova-api-0" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.250938 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87c23bb7-bf1c-4139-be89-8238bb660723-public-tls-certs\") pod \"nova-api-0\" (UID: \"87c23bb7-bf1c-4139-be89-8238bb660723\") " pod="openstack/nova-api-0" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.251731 4718 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87c23bb7-bf1c-4139-be89-8238bb660723-internal-tls-certs\") pod \"nova-api-0\" (UID: \"87c23bb7-bf1c-4139-be89-8238bb660723\") " pod="openstack/nova-api-0" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.257778 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c23bb7-bf1c-4139-be89-8238bb660723-config-data\") pod \"nova-api-0\" (UID: \"87c23bb7-bf1c-4139-be89-8238bb660723\") " pod="openstack/nova-api-0" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.265476 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcsbv\" (UniqueName: \"kubernetes.io/projected/87c23bb7-bf1c-4139-be89-8238bb660723-kube-api-access-pcsbv\") pod \"nova-api-0\" (UID: \"87c23bb7-bf1c-4139-be89-8238bb660723\") " pod="openstack/nova-api-0" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.441291 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 15:04:57 crc kubenswrapper[4718]: I1123 15:04:57.945857 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 15:04:57 crc kubenswrapper[4718]: W1123 15:04:57.948319 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87c23bb7_bf1c_4139_be89_8238bb660723.slice/crio-818fb43a47830d9f9711db83fd461043dcee1b04ac34c050d02468f8ba8c475d WatchSource:0}: Error finding container 818fb43a47830d9f9711db83fd461043dcee1b04ac34c050d02468f8ba8c475d: Status 404 returned error can't find the container with id 818fb43a47830d9f9711db83fd461043dcee1b04ac34c050d02468f8ba8c475d Nov 23 15:04:58 crc kubenswrapper[4718]: I1123 15:04:58.059029 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"786c1d0e-1895-4b3f-a95e-537692e9685d","Type":"ContainerStarted","Data":"851a3b8b5085436ef7b923c5784ca6dc491db2d108041d86e9a4de45ea294f5f"} Nov 23 15:04:58 crc kubenswrapper[4718]: I1123 15:04:58.060259 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87c23bb7-bf1c-4139-be89-8238bb660723","Type":"ContainerStarted","Data":"818fb43a47830d9f9711db83fd461043dcee1b04ac34c050d02468f8ba8c475d"} Nov 23 15:04:58 crc kubenswrapper[4718]: I1123 15:04:58.454162 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="370e8e20-e2e6-4428-8fb9-0e03617d8d97" path="/var/lib/kubelet/pods/370e8e20-e2e6-4428-8fb9-0e03617d8d97/volumes" Nov 23 15:04:58 crc kubenswrapper[4718]: I1123 15:04:58.602865 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 23 15:04:58 crc kubenswrapper[4718]: I1123 15:04:58.624808 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 23 15:04:59 crc kubenswrapper[4718]: I1123 15:04:59.079785 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87c23bb7-bf1c-4139-be89-8238bb660723","Type":"ContainerStarted","Data":"13c73a4e15a892fee455490e64390494ebd1e32453b7c1bd2b57a206a66e8d87"} Nov 23 15:04:59 crc kubenswrapper[4718]: I1123 15:04:59.079831 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"87c23bb7-bf1c-4139-be89-8238bb660723","Type":"ContainerStarted","Data":"e96928f4ed914a5bfdfd9976e1731c2784f38d27a9de25ebaaeecfa9fdb253a6"} Nov 23 15:04:59 crc kubenswrapper[4718]: I1123 15:04:59.085064 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"786c1d0e-1895-4b3f-a95e-537692e9685d","Type":"ContainerStarted","Data":"cdd9a4fb4fb899b83bedaf030f2f89a32767f8651d678aaac16a868d56f24ebb"} Nov 23 15:04:59 crc kubenswrapper[4718]: I1123 15:04:59.111979 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 23 15:04:59 crc kubenswrapper[4718]: I1123 15:04:59.112479 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.112469683 podStartE2EDuration="2.112469683s" podCreationTimestamp="2025-11-23 15:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:04:59.105426301 +0000 UTC m=+1150.345046165" watchObservedRunningTime="2025-11-23 15:04:59.112469683 +0000 UTC m=+1150.352089527" Nov 23 15:04:59 crc kubenswrapper[4718]: I1123 15:04:59.260129 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-mlqkj"] Nov 23 15:04:59 crc kubenswrapper[4718]: I1123 15:04:59.261853 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mlqkj" Nov 23 15:04:59 crc kubenswrapper[4718]: I1123 15:04:59.269576 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 23 15:04:59 crc kubenswrapper[4718]: I1123 15:04:59.270531 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 23 15:04:59 crc kubenswrapper[4718]: I1123 15:04:59.290794 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-mlqkj"] Nov 23 15:04:59 crc kubenswrapper[4718]: I1123 15:04:59.301197 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68623fd2-db0d-4196-8cfa-59411b9d7f68-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mlqkj\" (UID: \"68623fd2-db0d-4196-8cfa-59411b9d7f68\") " pod="openstack/nova-cell1-cell-mapping-mlqkj" Nov 23 15:04:59 crc kubenswrapper[4718]: I1123 15:04:59.301281 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn8tc\" (UniqueName: \"kubernetes.io/projected/68623fd2-db0d-4196-8cfa-59411b9d7f68-kube-api-access-dn8tc\") pod \"nova-cell1-cell-mapping-mlqkj\" (UID: \"68623fd2-db0d-4196-8cfa-59411b9d7f68\") " pod="openstack/nova-cell1-cell-mapping-mlqkj" Nov 23 15:04:59 crc kubenswrapper[4718]: I1123 15:04:59.301330 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68623fd2-db0d-4196-8cfa-59411b9d7f68-scripts\") pod \"nova-cell1-cell-mapping-mlqkj\" (UID: \"68623fd2-db0d-4196-8cfa-59411b9d7f68\") " pod="openstack/nova-cell1-cell-mapping-mlqkj" Nov 23 15:04:59 crc kubenswrapper[4718]: I1123 15:04:59.301353 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68623fd2-db0d-4196-8cfa-59411b9d7f68-config-data\") pod 
\"nova-cell1-cell-mapping-mlqkj\" (UID: \"68623fd2-db0d-4196-8cfa-59411b9d7f68\") " pod="openstack/nova-cell1-cell-mapping-mlqkj" Nov 23 15:04:59 crc kubenswrapper[4718]: I1123 15:04:59.402519 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68623fd2-db0d-4196-8cfa-59411b9d7f68-scripts\") pod \"nova-cell1-cell-mapping-mlqkj\" (UID: \"68623fd2-db0d-4196-8cfa-59411b9d7f68\") " pod="openstack/nova-cell1-cell-mapping-mlqkj" Nov 23 15:04:59 crc kubenswrapper[4718]: I1123 15:04:59.403669 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68623fd2-db0d-4196-8cfa-59411b9d7f68-config-data\") pod \"nova-cell1-cell-mapping-mlqkj\" (UID: \"68623fd2-db0d-4196-8cfa-59411b9d7f68\") " pod="openstack/nova-cell1-cell-mapping-mlqkj" Nov 23 15:04:59 crc kubenswrapper[4718]: I1123 15:04:59.403748 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68623fd2-db0d-4196-8cfa-59411b9d7f68-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mlqkj\" (UID: \"68623fd2-db0d-4196-8cfa-59411b9d7f68\") " pod="openstack/nova-cell1-cell-mapping-mlqkj" Nov 23 15:04:59 crc kubenswrapper[4718]: I1123 15:04:59.403843 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn8tc\" (UniqueName: \"kubernetes.io/projected/68623fd2-db0d-4196-8cfa-59411b9d7f68-kube-api-access-dn8tc\") pod \"nova-cell1-cell-mapping-mlqkj\" (UID: \"68623fd2-db0d-4196-8cfa-59411b9d7f68\") " pod="openstack/nova-cell1-cell-mapping-mlqkj" Nov 23 15:04:59 crc kubenswrapper[4718]: I1123 15:04:59.414230 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68623fd2-db0d-4196-8cfa-59411b9d7f68-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mlqkj\" (UID: \"68623fd2-db0d-4196-8cfa-59411b9d7f68\") " pod="openstack/nova-cell1-cell-mapping-mlqkj" Nov 23 15:04:59 crc kubenswrapper[4718]: I1123 15:04:59.419567 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn8tc\" (UniqueName: \"kubernetes.io/projected/68623fd2-db0d-4196-8cfa-59411b9d7f68-kube-api-access-dn8tc\") pod \"nova-cell1-cell-mapping-mlqkj\" (UID: \"68623fd2-db0d-4196-8cfa-59411b9d7f68\") " pod="openstack/nova-cell1-cell-mapping-mlqkj" Nov 23 15:04:59 crc kubenswrapper[4718]: I1123 15:04:59.420033 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68623fd2-db0d-4196-8cfa-59411b9d7f68-scripts\") pod \"nova-cell1-cell-mapping-mlqkj\" (UID: \"68623fd2-db0d-4196-8cfa-59411b9d7f68\") " pod="openstack/nova-cell1-cell-mapping-mlqkj" Nov 23 15:04:59 crc kubenswrapper[4718]: I1123 15:04:59.422620 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68623fd2-db0d-4196-8cfa-59411b9d7f68-config-data\") pod \"nova-cell1-cell-mapping-mlqkj\" (UID: \"68623fd2-db0d-4196-8cfa-59411b9d7f68\") " pod="openstack/nova-cell1-cell-mapping-mlqkj" Nov 23 15:04:59 crc kubenswrapper[4718]: I1123 15:04:59.590763 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mlqkj" Nov 23 15:05:00 crc kubenswrapper[4718]: W1123 15:05:00.080814 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68623fd2_db0d_4196_8cfa_59411b9d7f68.slice/crio-fd7b62572b17fbe7f46e3c10a841842bd470e72897ccbe3690e1c472c9be6113 WatchSource:0}: Error finding container fd7b62572b17fbe7f46e3c10a841842bd470e72897ccbe3690e1c472c9be6113: Status 404 returned error can't find the container with id fd7b62572b17fbe7f46e3c10a841842bd470e72897ccbe3690e1c472c9be6113 Nov 23 15:05:00 crc kubenswrapper[4718]: I1123 15:05:00.082359 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-mlqkj"] Nov 23 15:05:00 crc kubenswrapper[4718]: I1123 15:05:00.096158 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"786c1d0e-1895-4b3f-a95e-537692e9685d","Type":"ContainerStarted","Data":"965be5d7dff9c3f0a004e7df524609a13d25502b207c524d5ad8cddb48116189"} Nov 23 15:05:00 crc kubenswrapper[4718]: I1123 15:05:00.096258 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 15:05:00 crc kubenswrapper[4718]: I1123 15:05:00.097390 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mlqkj" event={"ID":"68623fd2-db0d-4196-8cfa-59411b9d7f68","Type":"ContainerStarted","Data":"fd7b62572b17fbe7f46e3c10a841842bd470e72897ccbe3690e1c472c9be6113"} Nov 23 15:05:00 crc kubenswrapper[4718]: I1123 15:05:00.133688 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.6897029 podStartE2EDuration="5.133668099s" podCreationTimestamp="2025-11-23 15:04:55 +0000 UTC" firstStartedPulling="2025-11-23 15:04:56.256245937 +0000 UTC m=+1147.495865781" lastFinishedPulling="2025-11-23 15:04:59.700211136 +0000 UTC m=+1150.939830980" observedRunningTime="2025-11-23 15:05:00.128661493 +0000 UTC m=+1151.368281337" watchObservedRunningTime="2025-11-23 15:05:00.133668099 +0000 UTC m=+1151.373287953" Nov 23 15:05:00 crc kubenswrapper[4718]: I1123 15:05:00.470602 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch" Nov 23 15:05:00 crc kubenswrapper[4718]: I1123 15:05:00.527376 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-5pqkd"] Nov 23 15:05:00 crc kubenswrapper[4718]: I1123 15:05:00.527918 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" podUID="50b65a80-4058-4be6-b1c6-79e1fd2e081f" containerName="dnsmasq-dns" containerID="cri-o://5d782cd7a874accb6bb0e23605b84bd713158f472e496b4164f410e33e96ecae" gracePeriod=10 Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.074004 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.105780 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mlqkj" event={"ID":"68623fd2-db0d-4196-8cfa-59411b9d7f68","Type":"ContainerStarted","Data":"80b54743a44e3a9e422fc9ff38d08a7e6beb306b10cde640a5001be11292ef87"} Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.109085 4718 generic.go:334] "Generic (PLEG): container finished" podID="50b65a80-4058-4be6-b1c6-79e1fd2e081f" containerID="5d782cd7a874accb6bb0e23605b84bd713158f472e496b4164f410e33e96ecae" exitCode=0 Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.109678 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.109821 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" event={"ID":"50b65a80-4058-4be6-b1c6-79e1fd2e081f","Type":"ContainerDied","Data":"5d782cd7a874accb6bb0e23605b84bd713158f472e496b4164f410e33e96ecae"} Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.109843 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-5pqkd" event={"ID":"50b65a80-4058-4be6-b1c6-79e1fd2e081f","Type":"ContainerDied","Data":"5e540d50af039e291dab2c4543f9e11c3a4f3c1a9bc1e8268c115d9a3973511b"} Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.109858 4718 scope.go:117] "RemoveContainer" containerID="5d782cd7a874accb6bb0e23605b84bd713158f472e496b4164f410e33e96ecae" Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.132539 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-mlqkj" podStartSLOduration=2.132519338 podStartE2EDuration="2.132519338s" podCreationTimestamp="2025-11-23 15:04:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:05:01.131238453 +0000 UTC m=+1152.370858297" watchObservedRunningTime="2025-11-23 15:05:01.132519338 +0000 UTC m=+1152.372139182" Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.144600 4718 scope.go:117] "RemoveContainer" containerID="5a70f425fc62a46762ace64c97718e5ab88fd0ea0593fbafd8fdc5a85f20f868" Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.151985 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-dns-svc\") pod \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\" (UID: \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\") " Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.152075 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-dns-swift-storage-0\") pod \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\" (UID: \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\") " Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.152173 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wlqm\" (UniqueName: \"kubernetes.io/projected/50b65a80-4058-4be6-b1c6-79e1fd2e081f-kube-api-access-2wlqm\") pod \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\" (UID: \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\") " Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.152239 4718 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-config\") pod \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\" (UID: \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\") " Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.152315 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-ovsdbserver-sb\") pod \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\" (UID: \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\") " Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.152340 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-ovsdbserver-nb\") pod \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\" (UID: \"50b65a80-4058-4be6-b1c6-79e1fd2e081f\") " Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.178749 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50b65a80-4058-4be6-b1c6-79e1fd2e081f-kube-api-access-2wlqm" (OuterVolumeSpecName: "kube-api-access-2wlqm") pod "50b65a80-4058-4be6-b1c6-79e1fd2e081f" (UID: "50b65a80-4058-4be6-b1c6-79e1fd2e081f"). InnerVolumeSpecName "kube-api-access-2wlqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.227655 4718 scope.go:117] "RemoveContainer" containerID="5d782cd7a874accb6bb0e23605b84bd713158f472e496b4164f410e33e96ecae" Nov 23 15:05:01 crc kubenswrapper[4718]: E1123 15:05:01.249629 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d782cd7a874accb6bb0e23605b84bd713158f472e496b4164f410e33e96ecae\": container with ID starting with 5d782cd7a874accb6bb0e23605b84bd713158f472e496b4164f410e33e96ecae not found: ID does not exist" containerID="5d782cd7a874accb6bb0e23605b84bd713158f472e496b4164f410e33e96ecae" Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.249671 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d782cd7a874accb6bb0e23605b84bd713158f472e496b4164f410e33e96ecae"} err="failed to get container status \"5d782cd7a874accb6bb0e23605b84bd713158f472e496b4164f410e33e96ecae\": rpc error: code = NotFound desc = could not find container \"5d782cd7a874accb6bb0e23605b84bd713158f472e496b4164f410e33e96ecae\": container with ID starting with 5d782cd7a874accb6bb0e23605b84bd713158f472e496b4164f410e33e96ecae not found: ID does not exist" Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.249696 4718 scope.go:117] "RemoveContainer" containerID="5a70f425fc62a46762ace64c97718e5ab88fd0ea0593fbafd8fdc5a85f20f868" Nov 23 15:05:01 crc kubenswrapper[4718]: E1123 15:05:01.251645 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a70f425fc62a46762ace64c97718e5ab88fd0ea0593fbafd8fdc5a85f20f868\": container with ID starting with 5a70f425fc62a46762ace64c97718e5ab88fd0ea0593fbafd8fdc5a85f20f868 not found: ID does not exist" containerID="5a70f425fc62a46762ace64c97718e5ab88fd0ea0593fbafd8fdc5a85f20f868" Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.251694 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a70f425fc62a46762ace64c97718e5ab88fd0ea0593fbafd8fdc5a85f20f868"} err="failed to get 
container status \"5a70f425fc62a46762ace64c97718e5ab88fd0ea0593fbafd8fdc5a85f20f868\": rpc error: code = NotFound desc = could not find container \"5a70f425fc62a46762ace64c97718e5ab88fd0ea0593fbafd8fdc5a85f20f868\": container with ID starting with 5a70f425fc62a46762ace64c97718e5ab88fd0ea0593fbafd8fdc5a85f20f868 not found: ID does not exist" Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.272095 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wlqm\" (UniqueName: \"kubernetes.io/projected/50b65a80-4058-4be6-b1c6-79e1fd2e081f-kube-api-access-2wlqm\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.309338 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "50b65a80-4058-4be6-b1c6-79e1fd2e081f" (UID: "50b65a80-4058-4be6-b1c6-79e1fd2e081f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.316025 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "50b65a80-4058-4be6-b1c6-79e1fd2e081f" (UID: "50b65a80-4058-4be6-b1c6-79e1fd2e081f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.329734 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "50b65a80-4058-4be6-b1c6-79e1fd2e081f" (UID: "50b65a80-4058-4be6-b1c6-79e1fd2e081f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.332363 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "50b65a80-4058-4be6-b1c6-79e1fd2e081f" (UID: "50b65a80-4058-4be6-b1c6-79e1fd2e081f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.332610 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-config" (OuterVolumeSpecName: "config") pod "50b65a80-4058-4be6-b1c6-79e1fd2e081f" (UID: "50b65a80-4058-4be6-b1c6-79e1fd2e081f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.375576 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-config\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.375608 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.375619 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.375627 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.375637 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50b65a80-4058-4be6-b1c6-79e1fd2e081f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.486235 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-5pqkd"] Nov 23 15:05:01 crc kubenswrapper[4718]: I1123 15:05:01.495023 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-5pqkd"] Nov 23 15:05:02 crc kubenswrapper[4718]: I1123 15:05:02.452038 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50b65a80-4058-4be6-b1c6-79e1fd2e081f" path="/var/lib/kubelet/pods/50b65a80-4058-4be6-b1c6-79e1fd2e081f/volumes" Nov 23 15:05:06 crc kubenswrapper[4718]: I1123 15:05:06.167150 4718 generic.go:334] "Generic (PLEG): container finished" podID="68623fd2-db0d-4196-8cfa-59411b9d7f68" containerID="80b54743a44e3a9e422fc9ff38d08a7e6beb306b10cde640a5001be11292ef87" exitCode=0 Nov 23 15:05:06 crc kubenswrapper[4718]: I1123 15:05:06.167919 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mlqkj" event={"ID":"68623fd2-db0d-4196-8cfa-59411b9d7f68","Type":"ContainerDied","Data":"80b54743a44e3a9e422fc9ff38d08a7e6beb306b10cde640a5001be11292ef87"} Nov 23 15:05:07 crc kubenswrapper[4718]: I1123 15:05:07.442013 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 15:05:07 crc kubenswrapper[4718]: I1123 15:05:07.442286 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 15:05:07 crc kubenswrapper[4718]: I1123 15:05:07.621832 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mlqkj" Nov 23 15:05:07 crc kubenswrapper[4718]: I1123 15:05:07.706453 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68623fd2-db0d-4196-8cfa-59411b9d7f68-config-data\") pod \"68623fd2-db0d-4196-8cfa-59411b9d7f68\" (UID: \"68623fd2-db0d-4196-8cfa-59411b9d7f68\") " Nov 23 15:05:07 crc kubenswrapper[4718]: I1123 15:05:07.706538 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn8tc\" (UniqueName: \"kubernetes.io/projected/68623fd2-db0d-4196-8cfa-59411b9d7f68-kube-api-access-dn8tc\") pod \"68623fd2-db0d-4196-8cfa-59411b9d7f68\" (UID: \"68623fd2-db0d-4196-8cfa-59411b9d7f68\") " Nov 23 15:05:07 crc kubenswrapper[4718]: I1123 15:05:07.706594 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68623fd2-db0d-4196-8cfa-59411b9d7f68-scripts\") pod \"68623fd2-db0d-4196-8cfa-59411b9d7f68\" (UID: \"68623fd2-db0d-4196-8cfa-59411b9d7f68\") " Nov 23 15:05:07 crc kubenswrapper[4718]: I1123 15:05:07.706792 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68623fd2-db0d-4196-8cfa-59411b9d7f68-combined-ca-bundle\") pod \"68623fd2-db0d-4196-8cfa-59411b9d7f68\" (UID: \"68623fd2-db0d-4196-8cfa-59411b9d7f68\") " Nov 23 15:05:07 crc kubenswrapper[4718]: I1123 15:05:07.712303 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68623fd2-db0d-4196-8cfa-59411b9d7f68-kube-api-access-dn8tc" (OuterVolumeSpecName: "kube-api-access-dn8tc") pod "68623fd2-db0d-4196-8cfa-59411b9d7f68" (UID: "68623fd2-db0d-4196-8cfa-59411b9d7f68"). InnerVolumeSpecName "kube-api-access-dn8tc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:05:07 crc kubenswrapper[4718]: I1123 15:05:07.713496 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68623fd2-db0d-4196-8cfa-59411b9d7f68-scripts" (OuterVolumeSpecName: "scripts") pod "68623fd2-db0d-4196-8cfa-59411b9d7f68" (UID: "68623fd2-db0d-4196-8cfa-59411b9d7f68"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:05:07 crc kubenswrapper[4718]: I1123 15:05:07.735761 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68623fd2-db0d-4196-8cfa-59411b9d7f68-config-data" (OuterVolumeSpecName: "config-data") pod "68623fd2-db0d-4196-8cfa-59411b9d7f68" (UID: "68623fd2-db0d-4196-8cfa-59411b9d7f68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:05:07 crc kubenswrapper[4718]: I1123 15:05:07.749668 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68623fd2-db0d-4196-8cfa-59411b9d7f68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68623fd2-db0d-4196-8cfa-59411b9d7f68" (UID: "68623fd2-db0d-4196-8cfa-59411b9d7f68"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:05:07 crc kubenswrapper[4718]: I1123 15:05:07.808525 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68623fd2-db0d-4196-8cfa-59411b9d7f68-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:07 crc kubenswrapper[4718]: I1123 15:05:07.808564 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn8tc\" (UniqueName: \"kubernetes.io/projected/68623fd2-db0d-4196-8cfa-59411b9d7f68-kube-api-access-dn8tc\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:07 crc kubenswrapper[4718]: I1123 15:05:07.808579 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68623fd2-db0d-4196-8cfa-59411b9d7f68-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:07 crc kubenswrapper[4718]: I1123 15:05:07.808603 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68623fd2-db0d-4196-8cfa-59411b9d7f68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:08 crc kubenswrapper[4718]: I1123 15:05:08.189310 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mlqkj" event={"ID":"68623fd2-db0d-4196-8cfa-59411b9d7f68","Type":"ContainerDied","Data":"fd7b62572b17fbe7f46e3c10a841842bd470e72897ccbe3690e1c472c9be6113"} Nov 23 15:05:08 crc kubenswrapper[4718]: I1123 15:05:08.189400 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd7b62572b17fbe7f46e3c10a841842bd470e72897ccbe3690e1c472c9be6113" Nov 23 15:05:08 crc kubenswrapper[4718]: I1123 15:05:08.189500 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mlqkj" Nov 23 15:05:08 crc kubenswrapper[4718]: I1123 15:05:08.377592 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 15:05:08 crc kubenswrapper[4718]: I1123 15:05:08.377828 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9f08df78-9b3f-42cb-811f-468c669c59f2" containerName="nova-scheduler-scheduler" containerID="cri-o://7c320ab3a9b108be25503aab9dbaa9d9ab1f995aa25a3b83becf107f57e21b08" gracePeriod=30 Nov 23 15:05:08 crc kubenswrapper[4718]: I1123 15:05:08.388886 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 15:05:08 crc kubenswrapper[4718]: I1123 15:05:08.389170 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="87c23bb7-bf1c-4139-be89-8238bb660723" containerName="nova-api-log" containerID="cri-o://e96928f4ed914a5bfdfd9976e1731c2784f38d27a9de25ebaaeecfa9fdb253a6" gracePeriod=30 Nov 23 15:05:08 crc kubenswrapper[4718]: I1123 15:05:08.389276 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="87c23bb7-bf1c-4139-be89-8238bb660723" containerName="nova-api-api" containerID="cri-o://13c73a4e15a892fee455490e64390494ebd1e32453b7c1bd2b57a206a66e8d87" gracePeriod=30 Nov 23 15:05:08 crc kubenswrapper[4718]: I1123 15:05:08.395255 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="87c23bb7-bf1c-4139-be89-8238bb660723" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": EOF" Nov 23 15:05:08 crc kubenswrapper[4718]: I1123 15:05:08.397642 4718 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="87c23bb7-bf1c-4139-be89-8238bb660723" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": EOF" Nov 23 15:05:08 crc kubenswrapper[4718]: I1123 15:05:08.457549 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 15:05:08 crc kubenswrapper[4718]: I1123 15:05:08.457843 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="58a07cc4-616d-45d0-a935-1a00d23f3d71" containerName="nova-metadata-log" containerID="cri-o://aa60533c87b3001ab3fe5f9b05fdf5c49d0157fa170bfa7ad23e333c2e4cd3cc" gracePeriod=30 Nov 23 15:05:08 crc kubenswrapper[4718]: I1123 15:05:08.457961 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="58a07cc4-616d-45d0-a935-1a00d23f3d71" containerName="nova-metadata-metadata" containerID="cri-o://6a4987d27f920d52f75939c3d09d6e9d39d6c3dbcceefc44b0ad42e13d0cfb1b" gracePeriod=30 Nov 23 15:05:08 crc kubenswrapper[4718]: E1123 15:05:08.956903 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7c320ab3a9b108be25503aab9dbaa9d9ab1f995aa25a3b83becf107f57e21b08" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 15:05:08 crc kubenswrapper[4718]: E1123 15:05:08.958812 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7c320ab3a9b108be25503aab9dbaa9d9ab1f995aa25a3b83becf107f57e21b08" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 15:05:08 crc kubenswrapper[4718]: E1123 15:05:08.960298 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7c320ab3a9b108be25503aab9dbaa9d9ab1f995aa25a3b83becf107f57e21b08" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 15:05:08 crc kubenswrapper[4718]: E1123 15:05:08.960378 4718 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9f08df78-9b3f-42cb-811f-468c669c59f2" containerName="nova-scheduler-scheduler" Nov 23 15:05:09 crc kubenswrapper[4718]: I1123 15:05:09.199602 4718 generic.go:334] "Generic (PLEG): container finished" podID="58a07cc4-616d-45d0-a935-1a00d23f3d71" containerID="aa60533c87b3001ab3fe5f9b05fdf5c49d0157fa170bfa7ad23e333c2e4cd3cc" exitCode=143 Nov 23 15:05:09 crc kubenswrapper[4718]: I1123 15:05:09.199685 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"58a07cc4-616d-45d0-a935-1a00d23f3d71","Type":"ContainerDied","Data":"aa60533c87b3001ab3fe5f9b05fdf5c49d0157fa170bfa7ad23e333c2e4cd3cc"} Nov 23 15:05:09 crc kubenswrapper[4718]: I1123 15:05:09.201754 4718 generic.go:334] "Generic (PLEG): container finished" podID="87c23bb7-bf1c-4139-be89-8238bb660723" containerID="e96928f4ed914a5bfdfd9976e1731c2784f38d27a9de25ebaaeecfa9fdb253a6" exitCode=143 Nov 23 15:05:09 crc kubenswrapper[4718]: I1123 15:05:09.201779 4718 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87c23bb7-bf1c-4139-be89-8238bb660723","Type":"ContainerDied","Data":"e96928f4ed914a5bfdfd9976e1731c2784f38d27a9de25ebaaeecfa9fdb253a6"} Nov 23 15:05:11 crc kubenswrapper[4718]: I1123 15:05:11.589959 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="58a07cc4-616d-45d0-a935-1a00d23f3d71" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": read tcp 10.217.0.2:46180->10.217.0.192:8775: read: connection reset by peer" Nov 23 15:05:11 crc kubenswrapper[4718]: I1123 15:05:11.590511 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="58a07cc4-616d-45d0-a935-1a00d23f3d71" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": read tcp 10.217.0.2:46178->10.217.0.192:8775: read: connection reset by peer" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.134436 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.237200 4718 generic.go:334] "Generic (PLEG): container finished" podID="58a07cc4-616d-45d0-a935-1a00d23f3d71" containerID="6a4987d27f920d52f75939c3d09d6e9d39d6c3dbcceefc44b0ad42e13d0cfb1b" exitCode=0 Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.237246 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"58a07cc4-616d-45d0-a935-1a00d23f3d71","Type":"ContainerDied","Data":"6a4987d27f920d52f75939c3d09d6e9d39d6c3dbcceefc44b0ad42e13d0cfb1b"} Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.237276 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"58a07cc4-616d-45d0-a935-1a00d23f3d71","Type":"ContainerDied","Data":"d8f6bc69bf448c9acedac811258d5a833be20f452d390462b575baba8b362a14"} Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.237286 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.237296 4718 scope.go:117] "RemoveContainer" containerID="6a4987d27f920d52f75939c3d09d6e9d39d6c3dbcceefc44b0ad42e13d0cfb1b" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.273835 4718 scope.go:117] "RemoveContainer" containerID="aa60533c87b3001ab3fe5f9b05fdf5c49d0157fa170bfa7ad23e333c2e4cd3cc" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.293104 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/58a07cc4-616d-45d0-a935-1a00d23f3d71-nova-metadata-tls-certs\") pod \"58a07cc4-616d-45d0-a935-1a00d23f3d71\" (UID: \"58a07cc4-616d-45d0-a935-1a00d23f3d71\") " Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.293312 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkrqp\" (UniqueName: \"kubernetes.io/projected/58a07cc4-616d-45d0-a935-1a00d23f3d71-kube-api-access-kkrqp\") pod \"58a07cc4-616d-45d0-a935-1a00d23f3d71\" (UID: \"58a07cc4-616d-45d0-a935-1a00d23f3d71\") " Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.293343 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58a07cc4-616d-45d0-a935-1a00d23f3d71-logs\") pod \"58a07cc4-616d-45d0-a935-1a00d23f3d71\" (UID: \"58a07cc4-616d-45d0-a935-1a00d23f3d71\") " Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.293381 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a07cc4-616d-45d0-a935-1a00d23f3d71-combined-ca-bundle\") pod \"58a07cc4-616d-45d0-a935-1a00d23f3d71\" (UID: \"58a07cc4-616d-45d0-a935-1a00d23f3d71\") " Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.293410 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58a07cc4-616d-45d0-a935-1a00d23f3d71-config-data\") pod \"58a07cc4-616d-45d0-a935-1a00d23f3d71\" (UID: \"58a07cc4-616d-45d0-a935-1a00d23f3d71\") " Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.296126 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58a07cc4-616d-45d0-a935-1a00d23f3d71-logs" (OuterVolumeSpecName: "logs") pod "58a07cc4-616d-45d0-a935-1a00d23f3d71" (UID: "58a07cc4-616d-45d0-a935-1a00d23f3d71"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.318692 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58a07cc4-616d-45d0-a935-1a00d23f3d71-kube-api-access-kkrqp" (OuterVolumeSpecName: "kube-api-access-kkrqp") pod "58a07cc4-616d-45d0-a935-1a00d23f3d71" (UID: "58a07cc4-616d-45d0-a935-1a00d23f3d71"). InnerVolumeSpecName "kube-api-access-kkrqp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.331148 4718 scope.go:117] "RemoveContainer" containerID="6a4987d27f920d52f75939c3d09d6e9d39d6c3dbcceefc44b0ad42e13d0cfb1b" Nov 23 15:05:12 crc kubenswrapper[4718]: E1123 15:05:12.331555 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a4987d27f920d52f75939c3d09d6e9d39d6c3dbcceefc44b0ad42e13d0cfb1b\": container with ID starting with 6a4987d27f920d52f75939c3d09d6e9d39d6c3dbcceefc44b0ad42e13d0cfb1b not found: ID does not exist" containerID="6a4987d27f920d52f75939c3d09d6e9d39d6c3dbcceefc44b0ad42e13d0cfb1b" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.331588 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a4987d27f920d52f75939c3d09d6e9d39d6c3dbcceefc44b0ad42e13d0cfb1b"} err="failed to get container status \"6a4987d27f920d52f75939c3d09d6e9d39d6c3dbcceefc44b0ad42e13d0cfb1b\": rpc error: code = NotFound desc = could not find container \"6a4987d27f920d52f75939c3d09d6e9d39d6c3dbcceefc44b0ad42e13d0cfb1b\": container with ID starting with 6a4987d27f920d52f75939c3d09d6e9d39d6c3dbcceefc44b0ad42e13d0cfb1b not found: ID does not exist" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.331608 4718 scope.go:117] "RemoveContainer" containerID="aa60533c87b3001ab3fe5f9b05fdf5c49d0157fa170bfa7ad23e333c2e4cd3cc" Nov 23 15:05:12 crc kubenswrapper[4718]: E1123 15:05:12.331830 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa60533c87b3001ab3fe5f9b05fdf5c49d0157fa170bfa7ad23e333c2e4cd3cc\": container with ID starting with aa60533c87b3001ab3fe5f9b05fdf5c49d0157fa170bfa7ad23e333c2e4cd3cc not found: ID does not exist" containerID="aa60533c87b3001ab3fe5f9b05fdf5c49d0157fa170bfa7ad23e333c2e4cd3cc" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.331850 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa60533c87b3001ab3fe5f9b05fdf5c49d0157fa170bfa7ad23e333c2e4cd3cc"} err="failed to get container status \"aa60533c87b3001ab3fe5f9b05fdf5c49d0157fa170bfa7ad23e333c2e4cd3cc\": rpc error: code = NotFound desc = could not find container \"aa60533c87b3001ab3fe5f9b05fdf5c49d0157fa170bfa7ad23e333c2e4cd3cc\": container with ID starting with aa60533c87b3001ab3fe5f9b05fdf5c49d0157fa170bfa7ad23e333c2e4cd3cc not found: ID does not exist" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.346923 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58a07cc4-616d-45d0-a935-1a00d23f3d71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58a07cc4-616d-45d0-a935-1a00d23f3d71" (UID: "58a07cc4-616d-45d0-a935-1a00d23f3d71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.351567 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58a07cc4-616d-45d0-a935-1a00d23f3d71-config-data" (OuterVolumeSpecName: "config-data") pod "58a07cc4-616d-45d0-a935-1a00d23f3d71" (UID: "58a07cc4-616d-45d0-a935-1a00d23f3d71"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.376001 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58a07cc4-616d-45d0-a935-1a00d23f3d71-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "58a07cc4-616d-45d0-a935-1a00d23f3d71" (UID: "58a07cc4-616d-45d0-a935-1a00d23f3d71"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.395327 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkrqp\" (UniqueName: \"kubernetes.io/projected/58a07cc4-616d-45d0-a935-1a00d23f3d71-kube-api-access-kkrqp\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.395369 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58a07cc4-616d-45d0-a935-1a00d23f3d71-logs\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.395382 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a07cc4-616d-45d0-a935-1a00d23f3d71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.395395 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58a07cc4-616d-45d0-a935-1a00d23f3d71-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.395406 4718 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/58a07cc4-616d-45d0-a935-1a00d23f3d71-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.565309 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.572760 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.582399 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 15:05:12 crc kubenswrapper[4718]: E1123 15:05:12.582958 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58a07cc4-616d-45d0-a935-1a00d23f3d71" containerName="nova-metadata-metadata" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.583215 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="58a07cc4-616d-45d0-a935-1a00d23f3d71" containerName="nova-metadata-metadata" Nov 23 15:05:12 crc kubenswrapper[4718]: E1123 15:05:12.583315 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68623fd2-db0d-4196-8cfa-59411b9d7f68" containerName="nova-manage" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.583368 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="68623fd2-db0d-4196-8cfa-59411b9d7f68" containerName="nova-manage" Nov 23 15:05:12 crc kubenswrapper[4718]: E1123 15:05:12.583449 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58a07cc4-616d-45d0-a935-1a00d23f3d71" containerName="nova-metadata-log" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.583505 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="58a07cc4-616d-45d0-a935-1a00d23f3d71" containerName="nova-metadata-log" Nov 23 15:05:12 crc kubenswrapper[4718]: 
E1123 15:05:12.583687 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b65a80-4058-4be6-b1c6-79e1fd2e081f" containerName="dnsmasq-dns" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.583770 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b65a80-4058-4be6-b1c6-79e1fd2e081f" containerName="dnsmasq-dns" Nov 23 15:05:12 crc kubenswrapper[4718]: E1123 15:05:12.583837 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b65a80-4058-4be6-b1c6-79e1fd2e081f" containerName="init" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.583903 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b65a80-4058-4be6-b1c6-79e1fd2e081f" containerName="init" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.584147 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="58a07cc4-616d-45d0-a935-1a00d23f3d71" containerName="nova-metadata-log" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.584227 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="68623fd2-db0d-4196-8cfa-59411b9d7f68" containerName="nova-manage" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.584312 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b65a80-4058-4be6-b1c6-79e1fd2e081f" containerName="dnsmasq-dns" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.584383 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="58a07cc4-616d-45d0-a935-1a00d23f3d71" containerName="nova-metadata-metadata" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.585570 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.588793 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.589041 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.601017 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.601051 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3e138f-87e1-4b20-8fba-0fa931f9e09e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8c3e138f-87e1-4b20-8fba-0fa931f9e09e\") " pod="openstack/nova-metadata-0" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.601143 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c3e138f-87e1-4b20-8fba-0fa931f9e09e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8c3e138f-87e1-4b20-8fba-0fa931f9e09e\") " pod="openstack/nova-metadata-0" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.601365 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c3e138f-87e1-4b20-8fba-0fa931f9e09e-logs\") pod \"nova-metadata-0\" (UID: \"8c3e138f-87e1-4b20-8fba-0fa931f9e09e\") " pod="openstack/nova-metadata-0" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.601516 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5qn2\" (UniqueName: 
\"kubernetes.io/projected/8c3e138f-87e1-4b20-8fba-0fa931f9e09e-kube-api-access-j5qn2\") pod \"nova-metadata-0\" (UID: \"8c3e138f-87e1-4b20-8fba-0fa931f9e09e\") " pod="openstack/nova-metadata-0" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.601558 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c3e138f-87e1-4b20-8fba-0fa931f9e09e-config-data\") pod \"nova-metadata-0\" (UID: \"8c3e138f-87e1-4b20-8fba-0fa931f9e09e\") " pod="openstack/nova-metadata-0" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.703523 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5qn2\" (UniqueName: \"kubernetes.io/projected/8c3e138f-87e1-4b20-8fba-0fa931f9e09e-kube-api-access-j5qn2\") pod \"nova-metadata-0\" (UID: \"8c3e138f-87e1-4b20-8fba-0fa931f9e09e\") " pod="openstack/nova-metadata-0" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.703573 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c3e138f-87e1-4b20-8fba-0fa931f9e09e-config-data\") pod \"nova-metadata-0\" (UID: \"8c3e138f-87e1-4b20-8fba-0fa931f9e09e\") " pod="openstack/nova-metadata-0" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.703656 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3e138f-87e1-4b20-8fba-0fa931f9e09e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8c3e138f-87e1-4b20-8fba-0fa931f9e09e\") " pod="openstack/nova-metadata-0" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.703709 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c3e138f-87e1-4b20-8fba-0fa931f9e09e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8c3e138f-87e1-4b20-8fba-0fa931f9e09e\") " pod="openstack/nova-metadata-0" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.703816 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c3e138f-87e1-4b20-8fba-0fa931f9e09e-logs\") pod \"nova-metadata-0\" (UID: \"8c3e138f-87e1-4b20-8fba-0fa931f9e09e\") " pod="openstack/nova-metadata-0" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.704236 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c3e138f-87e1-4b20-8fba-0fa931f9e09e-logs\") pod \"nova-metadata-0\" (UID: \"8c3e138f-87e1-4b20-8fba-0fa931f9e09e\") " pod="openstack/nova-metadata-0" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.709296 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c3e138f-87e1-4b20-8fba-0fa931f9e09e-config-data\") pod \"nova-metadata-0\" (UID: \"8c3e138f-87e1-4b20-8fba-0fa931f9e09e\") " pod="openstack/nova-metadata-0" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.711353 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3e138f-87e1-4b20-8fba-0fa931f9e09e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8c3e138f-87e1-4b20-8fba-0fa931f9e09e\") " pod="openstack/nova-metadata-0" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.712651 4718 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c3e138f-87e1-4b20-8fba-0fa931f9e09e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8c3e138f-87e1-4b20-8fba-0fa931f9e09e\") " pod="openstack/nova-metadata-0" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.723025 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5qn2\" (UniqueName: \"kubernetes.io/projected/8c3e138f-87e1-4b20-8fba-0fa931f9e09e-kube-api-access-j5qn2\") pod \"nova-metadata-0\" (UID: \"8c3e138f-87e1-4b20-8fba-0fa931f9e09e\") " pod="openstack/nova-metadata-0" Nov 23 15:05:12 crc kubenswrapper[4718]: I1123 15:05:12.908465 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 15:05:13 crc kubenswrapper[4718]: I1123 15:05:13.249660 4718 generic.go:334] "Generic (PLEG): container finished" podID="9f08df78-9b3f-42cb-811f-468c669c59f2" containerID="7c320ab3a9b108be25503aab9dbaa9d9ab1f995aa25a3b83becf107f57e21b08" exitCode=0 Nov 23 15:05:13 crc kubenswrapper[4718]: I1123 15:05:13.249744 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9f08df78-9b3f-42cb-811f-468c669c59f2","Type":"ContainerDied","Data":"7c320ab3a9b108be25503aab9dbaa9d9ab1f995aa25a3b83becf107f57e21b08"} Nov 23 15:05:13 crc kubenswrapper[4718]: I1123 15:05:13.398870 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 15:05:13 crc kubenswrapper[4718]: I1123 15:05:13.409045 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 15:05:13 crc kubenswrapper[4718]: I1123 15:05:13.551844 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f08df78-9b3f-42cb-811f-468c669c59f2-config-data\") pod \"9f08df78-9b3f-42cb-811f-468c669c59f2\" (UID: \"9f08df78-9b3f-42cb-811f-468c669c59f2\") " Nov 23 15:05:13 crc kubenswrapper[4718]: I1123 15:05:13.551930 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2bvq\" (UniqueName: \"kubernetes.io/projected/9f08df78-9b3f-42cb-811f-468c669c59f2-kube-api-access-h2bvq\") pod \"9f08df78-9b3f-42cb-811f-468c669c59f2\" (UID: \"9f08df78-9b3f-42cb-811f-468c669c59f2\") " Nov 23 15:05:13 crc kubenswrapper[4718]: I1123 15:05:13.552035 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f08df78-9b3f-42cb-811f-468c669c59f2-combined-ca-bundle\") pod \"9f08df78-9b3f-42cb-811f-468c669c59f2\" (UID: \"9f08df78-9b3f-42cb-811f-468c669c59f2\") " Nov 23 15:05:13 crc kubenswrapper[4718]: I1123 15:05:13.558963 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f08df78-9b3f-42cb-811f-468c669c59f2-kube-api-access-h2bvq" (OuterVolumeSpecName: "kube-api-access-h2bvq") pod "9f08df78-9b3f-42cb-811f-468c669c59f2" (UID: "9f08df78-9b3f-42cb-811f-468c669c59f2"). InnerVolumeSpecName "kube-api-access-h2bvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:05:13 crc kubenswrapper[4718]: I1123 15:05:13.584532 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f08df78-9b3f-42cb-811f-468c669c59f2-config-data" (OuterVolumeSpecName: "config-data") pod "9f08df78-9b3f-42cb-811f-468c669c59f2" (UID: "9f08df78-9b3f-42cb-811f-468c669c59f2"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:05:13 crc kubenswrapper[4718]: I1123 15:05:13.592842 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f08df78-9b3f-42cb-811f-468c669c59f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f08df78-9b3f-42cb-811f-468c669c59f2" (UID: "9f08df78-9b3f-42cb-811f-468c669c59f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:05:13 crc kubenswrapper[4718]: I1123 15:05:13.654354 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f08df78-9b3f-42cb-811f-468c669c59f2-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:13 crc kubenswrapper[4718]: I1123 15:05:13.655576 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2bvq\" (UniqueName: \"kubernetes.io/projected/9f08df78-9b3f-42cb-811f-468c669c59f2-kube-api-access-h2bvq\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:13 crc kubenswrapper[4718]: I1123 15:05:13.655593 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f08df78-9b3f-42cb-811f-468c669c59f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.265303 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.265501 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9f08df78-9b3f-42cb-811f-468c669c59f2","Type":"ContainerDied","Data":"3cf9eb189c2d7b22dc96c18889bec3d34839674e90cf23e0cc3a28138b421162"} Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.265733 4718 scope.go:117] "RemoveContainer" containerID="7c320ab3a9b108be25503aab9dbaa9d9ab1f995aa25a3b83becf107f57e21b08" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.270587 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8c3e138f-87e1-4b20-8fba-0fa931f9e09e","Type":"ContainerStarted","Data":"a34280fe5c32c98a0fa480e517f379cd8ced88474e2f600acaa34abe72324ccd"} Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.270651 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8c3e138f-87e1-4b20-8fba-0fa931f9e09e","Type":"ContainerStarted","Data":"1c7dfde0b4b01466f827accc53a8803d6f51ccf1e90b4c892cd7dcebbcea77d4"} Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.270665 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8c3e138f-87e1-4b20-8fba-0fa931f9e09e","Type":"ContainerStarted","Data":"750438180a4e2ab2f4e05d831e888d9bc8936da45ed1cf7806984fbbdfa166c1"} Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.274254 4718 generic.go:334] "Generic (PLEG): container finished" podID="87c23bb7-bf1c-4139-be89-8238bb660723" containerID="13c73a4e15a892fee455490e64390494ebd1e32453b7c1bd2b57a206a66e8d87" exitCode=0 Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.274295 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87c23bb7-bf1c-4139-be89-8238bb660723","Type":"ContainerDied","Data":"13c73a4e15a892fee455490e64390494ebd1e32453b7c1bd2b57a206a66e8d87"} Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.274317 4718 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87c23bb7-bf1c-4139-be89-8238bb660723","Type":"ContainerDied","Data":"818fb43a47830d9f9711db83fd461043dcee1b04ac34c050d02468f8ba8c475d"} Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.274330 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="818fb43a47830d9f9711db83fd461043dcee1b04ac34c050d02468f8ba8c475d" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.315762 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.315732555 podStartE2EDuration="2.315732555s" podCreationTimestamp="2025-11-23 15:05:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:05:14.292335252 +0000 UTC m=+1165.531955096" watchObservedRunningTime="2025-11-23 15:05:14.315732555 +0000 UTC m=+1165.555352409" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.335136 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.372893 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.399548 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.407387 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 15:05:14 crc kubenswrapper[4718]: E1123 15:05:14.408257 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87c23bb7-bf1c-4139-be89-8238bb660723" containerName="nova-api-api" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.408350 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="87c23bb7-bf1c-4139-be89-8238bb660723" containerName="nova-api-api" Nov 23 15:05:14 crc kubenswrapper[4718]: E1123 15:05:14.408456 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f08df78-9b3f-42cb-811f-468c669c59f2" containerName="nova-scheduler-scheduler" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.408548 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f08df78-9b3f-42cb-811f-468c669c59f2" containerName="nova-scheduler-scheduler" Nov 23 15:05:14 crc kubenswrapper[4718]: E1123 15:05:14.408703 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87c23bb7-bf1c-4139-be89-8238bb660723" containerName="nova-api-log" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.408870 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="87c23bb7-bf1c-4139-be89-8238bb660723" containerName="nova-api-log" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.409321 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f08df78-9b3f-42cb-811f-468c669c59f2" containerName="nova-scheduler-scheduler" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.409432 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="87c23bb7-bf1c-4139-be89-8238bb660723" containerName="nova-api-log" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.409560 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="87c23bb7-bf1c-4139-be89-8238bb660723" containerName="nova-api-api" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.410757 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.414958 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.419532 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.451497 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58a07cc4-616d-45d0-a935-1a00d23f3d71" path="/var/lib/kubelet/pods/58a07cc4-616d-45d0-a935-1a00d23f3d71/volumes" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.452539 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f08df78-9b3f-42cb-811f-468c669c59f2" path="/var/lib/kubelet/pods/9f08df78-9b3f-42cb-811f-468c669c59f2/volumes" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.469980 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcsbv\" (UniqueName: \"kubernetes.io/projected/87c23bb7-bf1c-4139-be89-8238bb660723-kube-api-access-pcsbv\") pod \"87c23bb7-bf1c-4139-be89-8238bb660723\" (UID: \"87c23bb7-bf1c-4139-be89-8238bb660723\") " Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.470018 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c23bb7-bf1c-4139-be89-8238bb660723-config-data\") pod \"87c23bb7-bf1c-4139-be89-8238bb660723\" (UID: \"87c23bb7-bf1c-4139-be89-8238bb660723\") " Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.470040 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c23bb7-bf1c-4139-be89-8238bb660723-combined-ca-bundle\") pod \"87c23bb7-bf1c-4139-be89-8238bb660723\" (UID: \"87c23bb7-bf1c-4139-be89-8238bb660723\") " Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.470171 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87c23bb7-bf1c-4139-be89-8238bb660723-logs\") pod \"87c23bb7-bf1c-4139-be89-8238bb660723\" (UID: \"87c23bb7-bf1c-4139-be89-8238bb660723\") " Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.470245 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87c23bb7-bf1c-4139-be89-8238bb660723-internal-tls-certs\") pod \"87c23bb7-bf1c-4139-be89-8238bb660723\" (UID: \"87c23bb7-bf1c-4139-be89-8238bb660723\") " Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.470305 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87c23bb7-bf1c-4139-be89-8238bb660723-public-tls-certs\") pod \"87c23bb7-bf1c-4139-be89-8238bb660723\" (UID: \"87c23bb7-bf1c-4139-be89-8238bb660723\") " Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.470621 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8446752b-4a28-452c-8df8-6ac8558b7754-config-data\") pod \"nova-scheduler-0\" (UID: \"8446752b-4a28-452c-8df8-6ac8558b7754\") " pod="openstack/nova-scheduler-0" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.470769 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-87n5d\" (UniqueName: \"kubernetes.io/projected/8446752b-4a28-452c-8df8-6ac8558b7754-kube-api-access-87n5d\") pod \"nova-scheduler-0\" (UID: \"8446752b-4a28-452c-8df8-6ac8558b7754\") " pod="openstack/nova-scheduler-0" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.470799 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8446752b-4a28-452c-8df8-6ac8558b7754-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8446752b-4a28-452c-8df8-6ac8558b7754\") " pod="openstack/nova-scheduler-0" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.470952 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87c23bb7-bf1c-4139-be89-8238bb660723-logs" (OuterVolumeSpecName: "logs") pod "87c23bb7-bf1c-4139-be89-8238bb660723" (UID: "87c23bb7-bf1c-4139-be89-8238bb660723"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.475647 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87c23bb7-bf1c-4139-be89-8238bb660723-kube-api-access-pcsbv" (OuterVolumeSpecName: "kube-api-access-pcsbv") pod "87c23bb7-bf1c-4139-be89-8238bb660723" (UID: "87c23bb7-bf1c-4139-be89-8238bb660723"). InnerVolumeSpecName "kube-api-access-pcsbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.499629 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c23bb7-bf1c-4139-be89-8238bb660723-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87c23bb7-bf1c-4139-be89-8238bb660723" (UID: "87c23bb7-bf1c-4139-be89-8238bb660723"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.504428 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c23bb7-bf1c-4139-be89-8238bb660723-config-data" (OuterVolumeSpecName: "config-data") pod "87c23bb7-bf1c-4139-be89-8238bb660723" (UID: "87c23bb7-bf1c-4139-be89-8238bb660723"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.526258 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c23bb7-bf1c-4139-be89-8238bb660723-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "87c23bb7-bf1c-4139-be89-8238bb660723" (UID: "87c23bb7-bf1c-4139-be89-8238bb660723"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.532745 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c23bb7-bf1c-4139-be89-8238bb660723-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "87c23bb7-bf1c-4139-be89-8238bb660723" (UID: "87c23bb7-bf1c-4139-be89-8238bb660723"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.571988 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87n5d\" (UniqueName: \"kubernetes.io/projected/8446752b-4a28-452c-8df8-6ac8558b7754-kube-api-access-87n5d\") pod \"nova-scheduler-0\" (UID: \"8446752b-4a28-452c-8df8-6ac8558b7754\") " pod="openstack/nova-scheduler-0" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.572058 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8446752b-4a28-452c-8df8-6ac8558b7754-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8446752b-4a28-452c-8df8-6ac8558b7754\") " pod="openstack/nova-scheduler-0" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.572141 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8446752b-4a28-452c-8df8-6ac8558b7754-config-data\") pod \"nova-scheduler-0\" (UID: \"8446752b-4a28-452c-8df8-6ac8558b7754\") " pod="openstack/nova-scheduler-0" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.572256 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87c23bb7-bf1c-4139-be89-8238bb660723-logs\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.572269 4718 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87c23bb7-bf1c-4139-be89-8238bb660723-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.572287 4718 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87c23bb7-bf1c-4139-be89-8238bb660723-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.572299 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcsbv\" (UniqueName: \"kubernetes.io/projected/87c23bb7-bf1c-4139-be89-8238bb660723-kube-api-access-pcsbv\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.572311 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c23bb7-bf1c-4139-be89-8238bb660723-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.572323 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c23bb7-bf1c-4139-be89-8238bb660723-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.575656 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8446752b-4a28-452c-8df8-6ac8558b7754-config-data\") pod \"nova-scheduler-0\" (UID: \"8446752b-4a28-452c-8df8-6ac8558b7754\") " pod="openstack/nova-scheduler-0" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.576323 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8446752b-4a28-452c-8df8-6ac8558b7754-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8446752b-4a28-452c-8df8-6ac8558b7754\") " pod="openstack/nova-scheduler-0" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.587397 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87n5d\" (UniqueName: \"kubernetes.io/projected/8446752b-4a28-452c-8df8-6ac8558b7754-kube-api-access-87n5d\") pod \"nova-scheduler-0\" (UID: \"8446752b-4a28-452c-8df8-6ac8558b7754\") " pod="openstack/nova-scheduler-0" Nov 23 15:05:14 crc kubenswrapper[4718]: I1123 15:05:14.731680 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.171289 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 15:05:15 crc kubenswrapper[4718]: W1123 15:05:15.177751 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8446752b_4a28_452c_8df8_6ac8558b7754.slice/crio-81e2fab8c511aa32cc68c39452da50554c3790fe3838a3a5a6b51d7357e8ecac WatchSource:0}: Error finding container 81e2fab8c511aa32cc68c39452da50554c3790fe3838a3a5a6b51d7357e8ecac: Status 404 returned error can't find the container with id 81e2fab8c511aa32cc68c39452da50554c3790fe3838a3a5a6b51d7357e8ecac Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.288909 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8446752b-4a28-452c-8df8-6ac8558b7754","Type":"ContainerStarted","Data":"81e2fab8c511aa32cc68c39452da50554c3790fe3838a3a5a6b51d7357e8ecac"} Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.291047 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.343933 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.377374 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.384583 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.388322 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.396379 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.398696 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.398773 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.406191 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.489802 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7nx7\" (UniqueName: \"kubernetes.io/projected/1bc63d6b-fef8-4086-bdf2-56e1ecb469bd-kube-api-access-l7nx7\") pod \"nova-api-0\" (UID: \"1bc63d6b-fef8-4086-bdf2-56e1ecb469bd\") " pod="openstack/nova-api-0" Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.490124 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bc63d6b-fef8-4086-bdf2-56e1ecb469bd-public-tls-certs\") pod \"nova-api-0\" (UID: \"1bc63d6b-fef8-4086-bdf2-56e1ecb469bd\") " pod="openstack/nova-api-0" Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.490190 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bc63d6b-fef8-4086-bdf2-56e1ecb469bd-logs\") pod \"nova-api-0\" (UID: \"1bc63d6b-fef8-4086-bdf2-56e1ecb469bd\") " pod="openstack/nova-api-0" Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.490221 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bc63d6b-fef8-4086-bdf2-56e1ecb469bd-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1bc63d6b-fef8-4086-bdf2-56e1ecb469bd\") " pod="openstack/nova-api-0" Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.490324 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bc63d6b-fef8-4086-bdf2-56e1ecb469bd-config-data\") pod \"nova-api-0\" (UID: \"1bc63d6b-fef8-4086-bdf2-56e1ecb469bd\") " pod="openstack/nova-api-0" Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.490702 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bc63d6b-fef8-4086-bdf2-56e1ecb469bd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1bc63d6b-fef8-4086-bdf2-56e1ecb469bd\") " pod="openstack/nova-api-0" Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.593325 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bc63d6b-fef8-4086-bdf2-56e1ecb469bd-config-data\") pod \"nova-api-0\" (UID: \"1bc63d6b-fef8-4086-bdf2-56e1ecb469bd\") " pod="openstack/nova-api-0" Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.593412 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bc63d6b-fef8-4086-bdf2-56e1ecb469bd-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"1bc63d6b-fef8-4086-bdf2-56e1ecb469bd\") " pod="openstack/nova-api-0" Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.593551 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7nx7\" (UniqueName: \"kubernetes.io/projected/1bc63d6b-fef8-4086-bdf2-56e1ecb469bd-kube-api-access-l7nx7\") pod \"nova-api-0\" (UID: \"1bc63d6b-fef8-4086-bdf2-56e1ecb469bd\") " pod="openstack/nova-api-0" Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.593615 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bc63d6b-fef8-4086-bdf2-56e1ecb469bd-public-tls-certs\") pod \"nova-api-0\" (UID: \"1bc63d6b-fef8-4086-bdf2-56e1ecb469bd\") " pod="openstack/nova-api-0" Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.593652 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bc63d6b-fef8-4086-bdf2-56e1ecb469bd-logs\") pod \"nova-api-0\" (UID: \"1bc63d6b-fef8-4086-bdf2-56e1ecb469bd\") " pod="openstack/nova-api-0" Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.593707 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bc63d6b-fef8-4086-bdf2-56e1ecb469bd-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1bc63d6b-fef8-4086-bdf2-56e1ecb469bd\") " pod="openstack/nova-api-0" Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.594589 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bc63d6b-fef8-4086-bdf2-56e1ecb469bd-logs\") pod \"nova-api-0\" (UID: \"1bc63d6b-fef8-4086-bdf2-56e1ecb469bd\") " pod="openstack/nova-api-0" Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.599696 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bc63d6b-fef8-4086-bdf2-56e1ecb469bd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1bc63d6b-fef8-4086-bdf2-56e1ecb469bd\") " pod="openstack/nova-api-0" Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.600689 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bc63d6b-fef8-4086-bdf2-56e1ecb469bd-config-data\") pod \"nova-api-0\" (UID: \"1bc63d6b-fef8-4086-bdf2-56e1ecb469bd\") " pod="openstack/nova-api-0" Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.600931 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bc63d6b-fef8-4086-bdf2-56e1ecb469bd-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1bc63d6b-fef8-4086-bdf2-56e1ecb469bd\") " pod="openstack/nova-api-0" Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.605230 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bc63d6b-fef8-4086-bdf2-56e1ecb469bd-public-tls-certs\") pod \"nova-api-0\" (UID: \"1bc63d6b-fef8-4086-bdf2-56e1ecb469bd\") " pod="openstack/nova-api-0" Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.613163 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7nx7\" (UniqueName: \"kubernetes.io/projected/1bc63d6b-fef8-4086-bdf2-56e1ecb469bd-kube-api-access-l7nx7\") pod \"nova-api-0\" (UID: \"1bc63d6b-fef8-4086-bdf2-56e1ecb469bd\") " pod="openstack/nova-api-0" Nov 
Nov 23 15:05:15 crc kubenswrapper[4718]: I1123 15:05:15.704886 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 23 15:05:16 crc kubenswrapper[4718]: I1123 15:05:16.183804 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 23 15:05:16 crc kubenswrapper[4718]: I1123 15:05:16.324622 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8446752b-4a28-452c-8df8-6ac8558b7754","Type":"ContainerStarted","Data":"3e86c5973bfc0508dc152b6943577394cf2c68fc79414de143947953351898fc"}
Nov 23 15:05:16 crc kubenswrapper[4718]: I1123 15:05:16.325685 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1bc63d6b-fef8-4086-bdf2-56e1ecb469bd","Type":"ContainerStarted","Data":"e7bfa8dc3b888c330a611fee47ea192ac111b2cff887b0ab71a7e272c8d75b45"}
Nov 23 15:05:16 crc kubenswrapper[4718]: I1123 15:05:16.347641 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.347624372 podStartE2EDuration="2.347624372s" podCreationTimestamp="2025-11-23 15:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:05:16.346988714 +0000 UTC m=+1167.586608578" watchObservedRunningTime="2025-11-23 15:05:16.347624372 +0000 UTC m=+1167.587244226"
Nov 23 15:05:16 crc kubenswrapper[4718]: I1123 15:05:16.451343 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87c23bb7-bf1c-4139-be89-8238bb660723" path="/var/lib/kubelet/pods/87c23bb7-bf1c-4139-be89-8238bb660723/volumes"
Nov 23 15:05:17 crc kubenswrapper[4718]: I1123 15:05:17.336347 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1bc63d6b-fef8-4086-bdf2-56e1ecb469bd","Type":"ContainerStarted","Data":"eb9e054021e1c8656ccf0c8813a534beb064f07ec29c6fa2e17bf2a25b1ec128"}
Nov 23 15:05:17 crc kubenswrapper[4718]: I1123 15:05:17.336779 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1bc63d6b-fef8-4086-bdf2-56e1ecb469bd","Type":"ContainerStarted","Data":"5c21d8b371bcc2152703014bdf2ae2bc6d4ab1288888619daaee5876f27e75e9"}
Nov 23 15:05:17 crc kubenswrapper[4718]: I1123 15:05:17.367712 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.3676964209999998 podStartE2EDuration="2.367696421s" podCreationTimestamp="2025-11-23 15:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:05:17.360877707 +0000 UTC m=+1168.600497551" watchObservedRunningTime="2025-11-23 15:05:17.367696421 +0000 UTC m=+1168.607316265"
Nov 23 15:05:17 crc kubenswrapper[4718]: I1123 15:05:17.908867 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 23 15:05:17 crc kubenswrapper[4718]: I1123 15:05:17.908927 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 23 15:05:19 crc kubenswrapper[4718]: I1123 15:05:19.732429 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Nov 23 15:05:22 crc kubenswrapper[4718]: I1123 15:05:22.908876 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
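
The pod_startup_latency_tracker lines above are plain clock arithmetic: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp (15:05:16.347624372 - 15:05:14 = 2.347624372s for nova-scheduler-0; nova-api-0's 2.3676964209999998 is the same subtraction carrying float64 rounding noise), and the zeroed 0001-01-01 pull timestamps just mean no image pull was needed. A small sketch of that subtraction, assuming only the timestamps printed in the log:

    package podslo

    import "time"

    // refLayout matches the reference format the kubelet prints for these
    // timestamps (Go's default time.Time string form).
    const refLayout = "2006-01-02 15:04:05.999999999 -0700 MST"

    // PodStartSLO reproduces podStartSLOduration for nova-scheduler-0:
    // watchObservedRunningTime minus podCreationTimestamp.
    func PodStartSLO() (time.Duration, error) {
    	created, err := time.Parse(refLayout, "2025-11-23 15:05:14 +0000 UTC")
    	if err != nil {
    		return 0, err
    	}
    	observed, err := time.Parse(refLayout, "2025-11-23 15:05:16.347624372 +0000 UTC")
    	if err != nil {
    		return 0, err
    	}
    	return observed.Sub(created), nil // 2.347624372s, as logged
    }
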
Nov 23 15:05:22 crc kubenswrapper[4718]: I1123 15:05:22.909218 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 23 15:05:23 crc kubenswrapper[4718]: I1123 15:05:23.927771 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8c3e138f-87e1-4b20-8fba-0fa931f9e09e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 23 15:05:23 crc kubenswrapper[4718]: I1123 15:05:23.927772 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8c3e138f-87e1-4b20-8fba-0fa931f9e09e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 23 15:05:24 crc kubenswrapper[4718]: I1123 15:05:24.733067 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Nov 23 15:05:24 crc kubenswrapper[4718]: I1123 15:05:24.777971 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Nov 23 15:05:25 crc kubenswrapper[4718]: I1123 15:05:25.456227 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Nov 23 15:05:25 crc kubenswrapper[4718]: I1123 15:05:25.705419 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 23 15:05:25 crc kubenswrapper[4718]: I1123 15:05:25.705530 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 23 15:05:25 crc kubenswrapper[4718]: I1123 15:05:25.772573 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Nov 23 15:05:26 crc kubenswrapper[4718]: I1123 15:05:26.727696 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1bc63d6b-fef8-4086-bdf2-56e1ecb469bd" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 23 15:05:26 crc kubenswrapper[4718]: I1123 15:05:26.727739 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1bc63d6b-fef8-4086-bdf2-56e1ecb469bd" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 23 15:05:32 crc kubenswrapper[4718]: I1123 15:05:32.921726 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 23 15:05:32 crc kubenswrapper[4718]: I1123 15:05:32.922259 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 23 15:05:32 crc kubenswrapper[4718]: I1123 15:05:32.926803 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 23 15:05:32 crc kubenswrapper[4718]: I1123 15:05:32.928928 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 23 15:05:35 crc kubenswrapper[4718]: I1123 15:05:35.718520 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
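
The probe outputs above pin down the probe shape: an HTTPS GET against the pod IP (port 8775 for nova-metadata, 8774 for nova-api) that the kubelet's prober cancels at its client timeout; the failures stop once the services finish coming up (status flips to "started" and then "ready" below). A hedged reconstruction of what such a startup probe looks like in Go; the scheme, path, and port come from the failure output, while the timing and threshold numbers are placeholders rather than values recoverable from this log:

    package probes

    import (
    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    // Startup probe consistent with the failures logged above:
    // an HTTPS GET on / at port 8774 of the pod IP. TimeoutSeconds,
    // PeriodSeconds, and FailureThreshold are illustrative only.
    var novaAPIStartupProbe = &corev1.Probe{
    	ProbeHandler: corev1.ProbeHandler{
    		HTTPGet: &corev1.HTTPGetAction{
    			Path:   "/",
    			Port:   intstr.FromInt32(8774),
    			Scheme: corev1.URISchemeHTTPS,
    		},
    	},
    	TimeoutSeconds:   3,  // the "Client.Timeout exceeded" in the probe output
    	PeriodSeconds:    5,  // probes recur until status="started" at 15:05:35
    	FailureThreshold: 12,
    }
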
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 23 15:05:35 crc kubenswrapper[4718]: I1123 15:05:35.719626 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 23 15:05:35 crc kubenswrapper[4718]: I1123 15:05:35.719680 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 23 15:05:35 crc kubenswrapper[4718]: I1123 15:05:35.729097 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 23 15:05:35 crc kubenswrapper[4718]: I1123 15:05:35.734487 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 23 15:05:44 crc kubenswrapper[4718]: I1123 15:05:44.543059 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 15:05:45 crc kubenswrapper[4718]: I1123 15:05:45.859432 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 15:05:48 crc kubenswrapper[4718]: I1123 15:05:48.634210 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="177531fd-3e9f-43b3-9540-a1a59957523e" containerName="rabbitmq" containerID="cri-o://fd95031a5f7a5b7fe5a3885b0f8a94929e202d03c009ded9a53e6fded604894f" gracePeriod=604796 Nov 23 15:05:49 crc kubenswrapper[4718]: I1123 15:05:49.854803 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed" containerName="rabbitmq" containerID="cri-o://5d0058c9c9e262b1cc6d76f6c4ac30f1a078663941009c1faf2e4264331a1baa" gracePeriod=604797 Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.240297 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.284407 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/177531fd-3e9f-43b3-9540-a1a59957523e-rabbitmq-confd\") pod \"177531fd-3e9f-43b3-9540-a1a59957523e\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.284497 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"177531fd-3e9f-43b3-9540-a1a59957523e\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.284538 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/177531fd-3e9f-43b3-9540-a1a59957523e-erlang-cookie-secret\") pod \"177531fd-3e9f-43b3-9540-a1a59957523e\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.284586 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/177531fd-3e9f-43b3-9540-a1a59957523e-rabbitmq-erlang-cookie\") pod \"177531fd-3e9f-43b3-9540-a1a59957523e\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.284617 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/177531fd-3e9f-43b3-9540-a1a59957523e-config-data\") pod \"177531fd-3e9f-43b3-9540-a1a59957523e\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.284662 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w27vj\" (UniqueName: \"kubernetes.io/projected/177531fd-3e9f-43b3-9540-a1a59957523e-kube-api-access-w27vj\") pod \"177531fd-3e9f-43b3-9540-a1a59957523e\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.284690 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/177531fd-3e9f-43b3-9540-a1a59957523e-pod-info\") pod \"177531fd-3e9f-43b3-9540-a1a59957523e\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.284753 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/177531fd-3e9f-43b3-9540-a1a59957523e-plugins-conf\") pod \"177531fd-3e9f-43b3-9540-a1a59957523e\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.284787 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/177531fd-3e9f-43b3-9540-a1a59957523e-rabbitmq-plugins\") pod \"177531fd-3e9f-43b3-9540-a1a59957523e\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.284822 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/177531fd-3e9f-43b3-9540-a1a59957523e-rabbitmq-tls\") pod \"177531fd-3e9f-43b3-9540-a1a59957523e\" (UID: 
\"177531fd-3e9f-43b3-9540-a1a59957523e\") " Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.284915 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/177531fd-3e9f-43b3-9540-a1a59957523e-server-conf\") pod \"177531fd-3e9f-43b3-9540-a1a59957523e\" (UID: \"177531fd-3e9f-43b3-9540-a1a59957523e\") " Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.298173 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/177531fd-3e9f-43b3-9540-a1a59957523e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "177531fd-3e9f-43b3-9540-a1a59957523e" (UID: "177531fd-3e9f-43b3-9540-a1a59957523e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.298249 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/177531fd-3e9f-43b3-9540-a1a59957523e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "177531fd-3e9f-43b3-9540-a1a59957523e" (UID: "177531fd-3e9f-43b3-9540-a1a59957523e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.302072 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/177531fd-3e9f-43b3-9540-a1a59957523e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "177531fd-3e9f-43b3-9540-a1a59957523e" (UID: "177531fd-3e9f-43b3-9540-a1a59957523e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.308783 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "177531fd-3e9f-43b3-9540-a1a59957523e" (UID: "177531fd-3e9f-43b3-9540-a1a59957523e"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.309076 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177531fd-3e9f-43b3-9540-a1a59957523e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "177531fd-3e9f-43b3-9540-a1a59957523e" (UID: "177531fd-3e9f-43b3-9540-a1a59957523e"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.310646 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/177531fd-3e9f-43b3-9540-a1a59957523e-kube-api-access-w27vj" (OuterVolumeSpecName: "kube-api-access-w27vj") pod "177531fd-3e9f-43b3-9540-a1a59957523e" (UID: "177531fd-3e9f-43b3-9540-a1a59957523e"). InnerVolumeSpecName "kube-api-access-w27vj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.316593 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/177531fd-3e9f-43b3-9540-a1a59957523e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "177531fd-3e9f-43b3-9540-a1a59957523e" (UID: "177531fd-3e9f-43b3-9540-a1a59957523e"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.318947 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/177531fd-3e9f-43b3-9540-a1a59957523e-pod-info" (OuterVolumeSpecName: "pod-info") pod "177531fd-3e9f-43b3-9540-a1a59957523e" (UID: "177531fd-3e9f-43b3-9540-a1a59957523e"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.332478 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/177531fd-3e9f-43b3-9540-a1a59957523e-config-data" (OuterVolumeSpecName: "config-data") pod "177531fd-3e9f-43b3-9540-a1a59957523e" (UID: "177531fd-3e9f-43b3-9540-a1a59957523e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.382919 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/177531fd-3e9f-43b3-9540-a1a59957523e-server-conf" (OuterVolumeSpecName: "server-conf") pod "177531fd-3e9f-43b3-9540-a1a59957523e" (UID: "177531fd-3e9f-43b3-9540-a1a59957523e"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.387903 4718 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/177531fd-3e9f-43b3-9540-a1a59957523e-server-conf\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.387947 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.387957 4718 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/177531fd-3e9f-43b3-9540-a1a59957523e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.387969 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/177531fd-3e9f-43b3-9540-a1a59957523e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.387978 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/177531fd-3e9f-43b3-9540-a1a59957523e-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.387987 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w27vj\" (UniqueName: \"kubernetes.io/projected/177531fd-3e9f-43b3-9540-a1a59957523e-kube-api-access-w27vj\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.387994 4718 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/177531fd-3e9f-43b3-9540-a1a59957523e-pod-info\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.388002 4718 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/177531fd-3e9f-43b3-9540-a1a59957523e-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 
Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.388011 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/177531fd-3e9f-43b3-9540-a1a59957523e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.388020 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/177531fd-3e9f-43b3-9540-a1a59957523e-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.420295 4718 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc"
Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.463590 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/177531fd-3e9f-43b3-9540-a1a59957523e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "177531fd-3e9f-43b3-9540-a1a59957523e" (UID: "177531fd-3e9f-43b3-9540-a1a59957523e"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.489485 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/177531fd-3e9f-43b3-9540-a1a59957523e-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.489517 4718 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\""
Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.737253 4718 generic.go:334] "Generic (PLEG): container finished" podID="177531fd-3e9f-43b3-9540-a1a59957523e" containerID="fd95031a5f7a5b7fe5a3885b0f8a94929e202d03c009ded9a53e6fded604894f" exitCode=0
Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.737346 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"177531fd-3e9f-43b3-9540-a1a59957523e","Type":"ContainerDied","Data":"fd95031a5f7a5b7fe5a3885b0f8a94929e202d03c009ded9a53e6fded604894f"}
Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.737373 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.737654 4718 scope.go:117] "RemoveContainer" containerID="fd95031a5f7a5b7fe5a3885b0f8a94929e202d03c009ded9a53e6fded604894f"
Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.737638 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"177531fd-3e9f-43b3-9540-a1a59957523e","Type":"ContainerDied","Data":"00274cd7f93663f9688c98e0d56e09802fb290277dc13cead986582e9babba50"}
Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.761226 4718 scope.go:117] "RemoveContainer" containerID="7106cb00063fa6ce3f4a7a647b44da4086d5cece529537226fea4f2ca1d965f2"
Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.788670 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.804731 4718 scope.go:117] "RemoveContainer" containerID="fd95031a5f7a5b7fe5a3885b0f8a94929e202d03c009ded9a53e6fded604894f"
Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.805001 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 23 15:05:55 crc kubenswrapper[4718]: E1123 15:05:55.806607 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd95031a5f7a5b7fe5a3885b0f8a94929e202d03c009ded9a53e6fded604894f\": container with ID starting with fd95031a5f7a5b7fe5a3885b0f8a94929e202d03c009ded9a53e6fded604894f not found: ID does not exist" containerID="fd95031a5f7a5b7fe5a3885b0f8a94929e202d03c009ded9a53e6fded604894f"
Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.806654 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd95031a5f7a5b7fe5a3885b0f8a94929e202d03c009ded9a53e6fded604894f"} err="failed to get container status \"fd95031a5f7a5b7fe5a3885b0f8a94929e202d03c009ded9a53e6fded604894f\": rpc error: code = NotFound desc = could not find container \"fd95031a5f7a5b7fe5a3885b0f8a94929e202d03c009ded9a53e6fded604894f\": container with ID starting with fd95031a5f7a5b7fe5a3885b0f8a94929e202d03c009ded9a53e6fded604894f not found: ID does not exist"
Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.806678 4718 scope.go:117] "RemoveContainer" containerID="7106cb00063fa6ce3f4a7a647b44da4086d5cece529537226fea4f2ca1d965f2"
Nov 23 15:05:55 crc kubenswrapper[4718]: E1123 15:05:55.806928 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7106cb00063fa6ce3f4a7a647b44da4086d5cece529537226fea4f2ca1d965f2\": container with ID starting with 7106cb00063fa6ce3f4a7a647b44da4086d5cece529537226fea4f2ca1d965f2 not found: ID does not exist" containerID="7106cb00063fa6ce3f4a7a647b44da4086d5cece529537226fea4f2ca1d965f2"
Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.806946 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7106cb00063fa6ce3f4a7a647b44da4086d5cece529537226fea4f2ca1d965f2"} err="failed to get container status \"7106cb00063fa6ce3f4a7a647b44da4086d5cece529537226fea4f2ca1d965f2\": rpc error: code = NotFound desc = could not find container \"7106cb00063fa6ce3f4a7a647b44da4086d5cece529537226fea4f2ca1d965f2\": container with ID starting with 7106cb00063fa6ce3f4a7a647b44da4086d5cece529537226fea4f2ca1d965f2 not found: ID does not exist"
15:05:55.818420 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 15:05:55 crc kubenswrapper[4718]: E1123 15:05:55.818859 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177531fd-3e9f-43b3-9540-a1a59957523e" containerName="setup-container" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.818878 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="177531fd-3e9f-43b3-9540-a1a59957523e" containerName="setup-container" Nov 23 15:05:55 crc kubenswrapper[4718]: E1123 15:05:55.818902 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177531fd-3e9f-43b3-9540-a1a59957523e" containerName="rabbitmq" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.818910 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="177531fd-3e9f-43b3-9540-a1a59957523e" containerName="rabbitmq" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.819083 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="177531fd-3e9f-43b3-9540-a1a59957523e" containerName="rabbitmq" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.820084 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.822655 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.822686 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.822856 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.822961 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-np26s" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.823069 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.826422 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.827457 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.829453 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.998324 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6cd89fb4-b66f-4df5-940d-fe185bd5e039-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.998377 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6cd89fb4-b66f-4df5-940d-fe185bd5e039-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.998428 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/6cd89fb4-b66f-4df5-940d-fe185bd5e039-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.998517 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6cd89fb4-b66f-4df5-940d-fe185bd5e039-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.998540 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6cd89fb4-b66f-4df5-940d-fe185bd5e039-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.998607 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6cd89fb4-b66f-4df5-940d-fe185bd5e039-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.998632 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cftk\" (UniqueName: \"kubernetes.io/projected/6cd89fb4-b66f-4df5-940d-fe185bd5e039-kube-api-access-5cftk\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.998655 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6cd89fb4-b66f-4df5-940d-fe185bd5e039-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.998682 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.998721 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6cd89fb4-b66f-4df5-940d-fe185bd5e039-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:55 crc kubenswrapper[4718]: I1123 15:05:55.998744 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6cd89fb4-b66f-4df5-940d-fe185bd5e039-config-data\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.100214 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6cd89fb4-b66f-4df5-940d-fe185bd5e039-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.100276 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6cd89fb4-b66f-4df5-940d-fe185bd5e039-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.100328 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6cd89fb4-b66f-4df5-940d-fe185bd5e039-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.100359 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6cd89fb4-b66f-4df5-940d-fe185bd5e039-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.100392 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6cd89fb4-b66f-4df5-940d-fe185bd5e039-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.100507 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6cd89fb4-b66f-4df5-940d-fe185bd5e039-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.100529 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cftk\" (UniqueName: \"kubernetes.io/projected/6cd89fb4-b66f-4df5-940d-fe185bd5e039-kube-api-access-5cftk\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.100548 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6cd89fb4-b66f-4df5-940d-fe185bd5e039-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.100583 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.100611 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6cd89fb4-b66f-4df5-940d-fe185bd5e039-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.100628 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/6cd89fb4-b66f-4df5-940d-fe185bd5e039-config-data\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.100860 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6cd89fb4-b66f-4df5-940d-fe185bd5e039-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.101191 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6cd89fb4-b66f-4df5-940d-fe185bd5e039-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.101394 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.101747 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6cd89fb4-b66f-4df5-940d-fe185bd5e039-config-data\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.101840 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6cd89fb4-b66f-4df5-940d-fe185bd5e039-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.102277 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6cd89fb4-b66f-4df5-940d-fe185bd5e039-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.105122 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6cd89fb4-b66f-4df5-940d-fe185bd5e039-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.105310 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6cd89fb4-b66f-4df5-940d-fe185bd5e039-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.110592 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6cd89fb4-b66f-4df5-940d-fe185bd5e039-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.113105 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6cd89fb4-b66f-4df5-940d-fe185bd5e039-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.120997 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cftk\" (UniqueName: \"kubernetes.io/projected/6cd89fb4-b66f-4df5-940d-fe185bd5e039-kube-api-access-5cftk\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.135880 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"6cd89fb4-b66f-4df5-940d-fe185bd5e039\") " pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.147113 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.392241 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.453689 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="177531fd-3e9f-43b3-9540-a1a59957523e" path="/var/lib/kubelet/pods/177531fd-3e9f-43b3-9540-a1a59957523e/volumes" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.507528 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-rabbitmq-tls\") pod \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.507585 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-rabbitmq-plugins\") pod \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.507646 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.507711 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-rabbitmq-erlang-cookie\") pod \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.507737 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-server-conf\") pod \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.508465 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-rabbitmq-confd\") pod \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.508796 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-config-data\") pod \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.508816 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-plugins-conf\") pod \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.508832 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-pod-info\") pod \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.508875 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz78q\" (UniqueName: \"kubernetes.io/projected/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-kube-api-access-xz78q\") pod \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.508924 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-erlang-cookie-secret\") pod \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\" (UID: \"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed\") " Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.508638 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed" (UID: "53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.508699 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed" (UID: "53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.511390 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed" (UID: "53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.520796 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed" (UID: "53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.521640 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-pod-info" (OuterVolumeSpecName: "pod-info") pod "53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed" (UID: "53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.521701 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-kube-api-access-xz78q" (OuterVolumeSpecName: "kube-api-access-xz78q") pod "53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed" (UID: "53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed"). InnerVolumeSpecName "kube-api-access-xz78q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.521856 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed" (UID: "53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.525614 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed" (UID: "53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.550632 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-config-data" (OuterVolumeSpecName: "config-data") pod "53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed" (UID: "53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.567210 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-server-conf" (OuterVolumeSpecName: "server-conf") pod "53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed" (UID: "53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.610031 4718 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.610061 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.610069 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.610095 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.610109 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.610118 4718 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-server-conf\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.610126 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.610134 4718 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.610141 4718 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-pod-info\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.610150 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz78q\" (UniqueName: \"kubernetes.io/projected/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-kube-api-access-xz78q\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.638860 4718 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.642939 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed" (UID: "53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.711049 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.714578 4718 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.714837 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.757073 4718 generic.go:334] "Generic (PLEG): container finished" podID="53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed" containerID="5d0058c9c9e262b1cc6d76f6c4ac30f1a078663941009c1faf2e4264331a1baa" exitCode=0 Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.757146 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed","Type":"ContainerDied","Data":"5d0058c9c9e262b1cc6d76f6c4ac30f1a078663941009c1faf2e4264331a1baa"} Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.757176 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed","Type":"ContainerDied","Data":"f36f91735966b1584661bc8d9b1f2b643c8f1c8255b655c5d2398ce013053f64"} Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.757197 4718 scope.go:117] "RemoveContainer" containerID="5d0058c9c9e262b1cc6d76f6c4ac30f1a078663941009c1faf2e4264331a1baa" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.757301 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.763573 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6cd89fb4-b66f-4df5-940d-fe185bd5e039","Type":"ContainerStarted","Data":"fe37250ec877337f80b740b03f1adb750e6eef9a376366d1d23126ad4905effb"} Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.776721 4718 scope.go:117] "RemoveContainer" containerID="d6478d663a65987d90b0eec3e5169d51a136fda6e4d7dbeef356884ff9f91c2c" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.805664 4718 scope.go:117] "RemoveContainer" containerID="5d0058c9c9e262b1cc6d76f6c4ac30f1a078663941009c1faf2e4264331a1baa" Nov 23 15:05:56 crc kubenswrapper[4718]: E1123 15:05:56.809617 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d0058c9c9e262b1cc6d76f6c4ac30f1a078663941009c1faf2e4264331a1baa\": container with ID starting with 5d0058c9c9e262b1cc6d76f6c4ac30f1a078663941009c1faf2e4264331a1baa not found: ID does not exist" containerID="5d0058c9c9e262b1cc6d76f6c4ac30f1a078663941009c1faf2e4264331a1baa" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.809673 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d0058c9c9e262b1cc6d76f6c4ac30f1a078663941009c1faf2e4264331a1baa"} err="failed to get container status \"5d0058c9c9e262b1cc6d76f6c4ac30f1a078663941009c1faf2e4264331a1baa\": rpc error: code = NotFound desc = could not find container \"5d0058c9c9e262b1cc6d76f6c4ac30f1a078663941009c1faf2e4264331a1baa\": container with ID starting with 5d0058c9c9e262b1cc6d76f6c4ac30f1a078663941009c1faf2e4264331a1baa not found: ID does not exist" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.809699 4718 scope.go:117] "RemoveContainer" containerID="d6478d663a65987d90b0eec3e5169d51a136fda6e4d7dbeef356884ff9f91c2c" Nov 23 15:05:56 crc kubenswrapper[4718]: E1123 15:05:56.810110 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6478d663a65987d90b0eec3e5169d51a136fda6e4d7dbeef356884ff9f91c2c\": container with ID starting with d6478d663a65987d90b0eec3e5169d51a136fda6e4d7dbeef356884ff9f91c2c not found: ID does not exist" containerID="d6478d663a65987d90b0eec3e5169d51a136fda6e4d7dbeef356884ff9f91c2c" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.810202 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6478d663a65987d90b0eec3e5169d51a136fda6e4d7dbeef356884ff9f91c2c"} err="failed to get container status \"d6478d663a65987d90b0eec3e5169d51a136fda6e4d7dbeef356884ff9f91c2c\": rpc error: code = NotFound desc = could not find container \"d6478d663a65987d90b0eec3e5169d51a136fda6e4d7dbeef356884ff9f91c2c\": container with ID starting with d6478d663a65987d90b0eec3e5169d51a136fda6e4d7dbeef356884ff9f91c2c not found: ID does not exist" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.814820 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.825259 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.834745 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 15:05:56 crc kubenswrapper[4718]: E1123 
15:05:56.835223 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed" containerName="rabbitmq" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.835246 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed" containerName="rabbitmq" Nov 23 15:05:56 crc kubenswrapper[4718]: E1123 15:05:56.835288 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed" containerName="setup-container" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.835300 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed" containerName="setup-container" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.836613 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed" containerName="rabbitmq" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.837926 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.840021 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dgjq4" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.840199 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.840523 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.840534 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.840629 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.840812 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.840947 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 23 15:05:56 crc kubenswrapper[4718]: I1123 15:05:56.859877 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 15:05:57 crc kubenswrapper[4718]: E1123 15:05:57.002976 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53aec48a_9b4c_4d78_8eaf_bce48ccfd6ed.slice/crio-f36f91735966b1584661bc8d9b1f2b643c8f1c8255b655c5d2398ce013053f64\": RecentStats: unable to find data in memory cache]" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.019603 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.019663 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.019800 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.019835 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmgx2\" (UniqueName: \"kubernetes.io/projected/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-kube-api-access-tmgx2\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.019876 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.019899 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.019926 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.019944 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.019965 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.020109 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.020157 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.122320 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmgx2\" (UniqueName: \"kubernetes.io/projected/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-kube-api-access-tmgx2\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.122697 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.122843 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.122997 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.123480 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.123603 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.123753 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.123780 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.124470 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.124090 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.124873 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.125106 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.125583 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.125943 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.126103 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.126392 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.128939 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.129225 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.129907 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.129925 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.135970 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.140433 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmgx2\" (UniqueName: \"kubernetes.io/projected/133d2692-e5ce-4298-89d3-6fc11ab5f0b3-kube-api-access-tmgx2\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.160349 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"133d2692-e5ce-4298-89d3-6fc11ab5f0b3\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.462160 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 15:05:57 crc kubenswrapper[4718]: I1123 15:05:57.926126 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 15:05:57 crc kubenswrapper[4718]: W1123 15:05:57.932846 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod133d2692_e5ce_4298_89d3_6fc11ab5f0b3.slice/crio-8d0e4e5928406dcc5af7f67cc814f5bc8e2e33118c34f65a369c3f691129aef8 WatchSource:0}: Error finding container 8d0e4e5928406dcc5af7f67cc814f5bc8e2e33118c34f65a369c3f691129aef8: Status 404 returned error can't find the container with id 8d0e4e5928406dcc5af7f67cc814f5bc8e2e33118c34f65a369c3f691129aef8 Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.193961 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-hflq4"] Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.195748 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.197368 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.207039 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-hflq4"] Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.249615 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-config\") pod \"dnsmasq-dns-79bd4cc8c9-hflq4\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.249703 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-hflq4\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.249761 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-hflq4\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.249994 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-hflq4\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.250039 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmxmn\" (UniqueName: \"kubernetes.io/projected/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-kube-api-access-qmxmn\") pod \"dnsmasq-dns-79bd4cc8c9-hflq4\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.250130 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-hflq4\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.250175 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-hflq4\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.352085 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-config\") pod \"dnsmasq-dns-79bd4cc8c9-hflq4\" (UID: 
\"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.352152 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-hflq4\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.352194 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-hflq4\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.352252 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-hflq4\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.352280 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmxmn\" (UniqueName: \"kubernetes.io/projected/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-kube-api-access-qmxmn\") pod \"dnsmasq-dns-79bd4cc8c9-hflq4\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.352327 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-hflq4\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.352363 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-hflq4\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.353746 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-hflq4\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.353758 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-hflq4\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.353820 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-hflq4\" (UID: 
\"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.353937 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-hflq4\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.353952 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-config\") pod \"dnsmasq-dns-79bd4cc8c9-hflq4\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.354492 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-hflq4\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.371398 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmxmn\" (UniqueName: \"kubernetes.io/projected/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-kube-api-access-qmxmn\") pod \"dnsmasq-dns-79bd4cc8c9-hflq4\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.453894 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed" path="/var/lib/kubelet/pods/53aec48a-9b4c-4d78-8eaf-bce48ccfd6ed/volumes" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.530734 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.790328 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"133d2692-e5ce-4298-89d3-6fc11ab5f0b3","Type":"ContainerStarted","Data":"8d0e4e5928406dcc5af7f67cc814f5bc8e2e33118c34f65a369c3f691129aef8"} Nov 23 15:05:58 crc kubenswrapper[4718]: I1123 15:05:58.792111 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6cd89fb4-b66f-4df5-940d-fe185bd5e039","Type":"ContainerStarted","Data":"3f66338918331ef9433a3836d70199c418519d7771dfa632b1cd56d7b0c707c4"} Nov 23 15:05:59 crc kubenswrapper[4718]: I1123 15:05:59.127179 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-hflq4"] Nov 23 15:05:59 crc kubenswrapper[4718]: W1123 15:05:59.207520 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46d4c5ea_f65e_4551_889b_dfd5fbbcd371.slice/crio-2aca2f7f188be8f7fa82021550959465fab1f23010cb9aef7c0779f446ec5848 WatchSource:0}: Error finding container 2aca2f7f188be8f7fa82021550959465fab1f23010cb9aef7c0779f446ec5848: Status 404 returned error can't find the container with id 2aca2f7f188be8f7fa82021550959465fab1f23010cb9aef7c0779f446ec5848 Nov 23 15:05:59 crc kubenswrapper[4718]: I1123 15:05:59.806738 4718 generic.go:334] "Generic (PLEG): container finished" podID="46d4c5ea-f65e-4551-889b-dfd5fbbcd371" containerID="aece8c81572f78fd754416a095c53ba896da70afa29cb213cdf23027ad5f9fd6" exitCode=0 Nov 23 15:05:59 crc kubenswrapper[4718]: I1123 15:05:59.807054 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" event={"ID":"46d4c5ea-f65e-4551-889b-dfd5fbbcd371","Type":"ContainerDied","Data":"aece8c81572f78fd754416a095c53ba896da70afa29cb213cdf23027ad5f9fd6"} Nov 23 15:05:59 crc kubenswrapper[4718]: I1123 15:05:59.807285 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" event={"ID":"46d4c5ea-f65e-4551-889b-dfd5fbbcd371","Type":"ContainerStarted","Data":"2aca2f7f188be8f7fa82021550959465fab1f23010cb9aef7c0779f446ec5848"} Nov 23 15:06:00 crc kubenswrapper[4718]: I1123 15:06:00.816549 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" event={"ID":"46d4c5ea-f65e-4551-889b-dfd5fbbcd371","Type":"ContainerStarted","Data":"5588e00de497f9069c223fc2408c16120c1be1d8df9b23ebc433c5396414e931"} Nov 23 15:06:00 crc kubenswrapper[4718]: I1123 15:06:00.817016 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:06:00 crc kubenswrapper[4718]: I1123 15:06:00.817913 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"133d2692-e5ce-4298-89d3-6fc11ab5f0b3","Type":"ContainerStarted","Data":"74b9fb00ebbffc738fa015e26bc3da27facf754121402fa1be49f47ac3ffdd37"} Nov 23 15:06:00 crc kubenswrapper[4718]: I1123 15:06:00.838269 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" podStartSLOduration=2.838244825 podStartE2EDuration="2.838244825s" podCreationTimestamp="2025-11-23 15:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:06:00.835294604 +0000 UTC 
m=+1212.074914448" watchObservedRunningTime="2025-11-23 15:06:00.838244825 +0000 UTC m=+1212.077864679" Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.532267 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.609999 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-d6sch"] Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.610280 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch" podUID="f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0" containerName="dnsmasq-dns" containerID="cri-o://a8ad028a18e0e029b6893489d3bc268613d206107ad47660b2c1aa363754e251" gracePeriod=10 Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.767846 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-44tvh"] Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.769310 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.787125 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-44tvh"] Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.879021 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/680f2aea-fad7-47b3-aabb-06c149297a03-config\") pod \"dnsmasq-dns-55478c4467-44tvh\" (UID: \"680f2aea-fad7-47b3-aabb-06c149297a03\") " pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.879279 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/680f2aea-fad7-47b3-aabb-06c149297a03-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-44tvh\" (UID: \"680f2aea-fad7-47b3-aabb-06c149297a03\") " pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.879420 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/680f2aea-fad7-47b3-aabb-06c149297a03-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-44tvh\" (UID: \"680f2aea-fad7-47b3-aabb-06c149297a03\") " pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.879533 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xn27\" (UniqueName: \"kubernetes.io/projected/680f2aea-fad7-47b3-aabb-06c149297a03-kube-api-access-5xn27\") pod \"dnsmasq-dns-55478c4467-44tvh\" (UID: \"680f2aea-fad7-47b3-aabb-06c149297a03\") " pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.879627 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/680f2aea-fad7-47b3-aabb-06c149297a03-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-44tvh\" (UID: \"680f2aea-fad7-47b3-aabb-06c149297a03\") " pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.879724 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/680f2aea-fad7-47b3-aabb-06c149297a03-dns-svc\") pod \"dnsmasq-dns-55478c4467-44tvh\" (UID: \"680f2aea-fad7-47b3-aabb-06c149297a03\") " pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.880255 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/680f2aea-fad7-47b3-aabb-06c149297a03-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-44tvh\" (UID: \"680f2aea-fad7-47b3-aabb-06c149297a03\") " pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.914816 4718 generic.go:334] "Generic (PLEG): container finished" podID="f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0" containerID="a8ad028a18e0e029b6893489d3bc268613d206107ad47660b2c1aa363754e251" exitCode=0 Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.914860 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch" event={"ID":"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0","Type":"ContainerDied","Data":"a8ad028a18e0e029b6893489d3bc268613d206107ad47660b2c1aa363754e251"} Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.981986 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/680f2aea-fad7-47b3-aabb-06c149297a03-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-44tvh\" (UID: \"680f2aea-fad7-47b3-aabb-06c149297a03\") " pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.982037 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xn27\" (UniqueName: \"kubernetes.io/projected/680f2aea-fad7-47b3-aabb-06c149297a03-kube-api-access-5xn27\") pod \"dnsmasq-dns-55478c4467-44tvh\" (UID: \"680f2aea-fad7-47b3-aabb-06c149297a03\") " pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.982066 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/680f2aea-fad7-47b3-aabb-06c149297a03-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-44tvh\" (UID: \"680f2aea-fad7-47b3-aabb-06c149297a03\") " pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.982106 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/680f2aea-fad7-47b3-aabb-06c149297a03-dns-svc\") pod \"dnsmasq-dns-55478c4467-44tvh\" (UID: \"680f2aea-fad7-47b3-aabb-06c149297a03\") " pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.982135 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/680f2aea-fad7-47b3-aabb-06c149297a03-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-44tvh\" (UID: \"680f2aea-fad7-47b3-aabb-06c149297a03\") " pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.982185 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/680f2aea-fad7-47b3-aabb-06c149297a03-config\") pod \"dnsmasq-dns-55478c4467-44tvh\" (UID: \"680f2aea-fad7-47b3-aabb-06c149297a03\") " pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:08 crc 
kubenswrapper[4718]: I1123 15:06:08.982201 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/680f2aea-fad7-47b3-aabb-06c149297a03-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-44tvh\" (UID: \"680f2aea-fad7-47b3-aabb-06c149297a03\") " pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.982943 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/680f2aea-fad7-47b3-aabb-06c149297a03-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-44tvh\" (UID: \"680f2aea-fad7-47b3-aabb-06c149297a03\") " pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.982996 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/680f2aea-fad7-47b3-aabb-06c149297a03-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-44tvh\" (UID: \"680f2aea-fad7-47b3-aabb-06c149297a03\") " pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.983716 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/680f2aea-fad7-47b3-aabb-06c149297a03-dns-svc\") pod \"dnsmasq-dns-55478c4467-44tvh\" (UID: \"680f2aea-fad7-47b3-aabb-06c149297a03\") " pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.984231 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/680f2aea-fad7-47b3-aabb-06c149297a03-config\") pod \"dnsmasq-dns-55478c4467-44tvh\" (UID: \"680f2aea-fad7-47b3-aabb-06c149297a03\") " pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.984348 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/680f2aea-fad7-47b3-aabb-06c149297a03-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-44tvh\" (UID: \"680f2aea-fad7-47b3-aabb-06c149297a03\") " pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:08 crc kubenswrapper[4718]: I1123 15:06:08.984527 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/680f2aea-fad7-47b3-aabb-06c149297a03-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-44tvh\" (UID: \"680f2aea-fad7-47b3-aabb-06c149297a03\") " pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.004283 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xn27\" (UniqueName: \"kubernetes.io/projected/680f2aea-fad7-47b3-aabb-06c149297a03-kube-api-access-5xn27\") pod \"dnsmasq-dns-55478c4467-44tvh\" (UID: \"680f2aea-fad7-47b3-aabb-06c149297a03\") " pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.133422 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.145228 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch" Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.289212 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-config\") pod \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\" (UID: \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\") " Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.289302 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-ovsdbserver-sb\") pod \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\" (UID: \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\") " Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.289328 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-ovsdbserver-nb\") pod \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\" (UID: \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\") " Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.289382 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-dns-svc\") pod \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\" (UID: \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\") " Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.289474 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-dns-swift-storage-0\") pod \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\" (UID: \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\") " Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.289506 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnjbx\" (UniqueName: \"kubernetes.io/projected/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-kube-api-access-bnjbx\") pod \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\" (UID: \"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0\") " Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.295611 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-kube-api-access-bnjbx" (OuterVolumeSpecName: "kube-api-access-bnjbx") pod "f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0" (UID: "f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0"). InnerVolumeSpecName "kube-api-access-bnjbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.386115 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0" (UID: "f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.386753 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0" (UID: "f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.396925 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.396971 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.396982 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnjbx\" (UniqueName: \"kubernetes.io/projected/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-kube-api-access-bnjbx\") on node \"crc\" DevicePath \"\"" Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.428197 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0" (UID: "f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.447007 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-config" (OuterVolumeSpecName: "config") pod "f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0" (UID: "f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.455183 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0" (UID: "f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.485665 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-44tvh"] Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.499038 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-config\") on node \"crc\" DevicePath \"\"" Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.499065 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.499090 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.925401 4718 generic.go:334] "Generic (PLEG): container finished" podID="680f2aea-fad7-47b3-aabb-06c149297a03" containerID="aa2005e93dfa2d76c96bb4502760c9b28b49e94651999fe391a87de4b2c6cdd8" exitCode=0 Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.925563 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-44tvh" event={"ID":"680f2aea-fad7-47b3-aabb-06c149297a03","Type":"ContainerDied","Data":"aa2005e93dfa2d76c96bb4502760c9b28b49e94651999fe391a87de4b2c6cdd8"} Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.925772 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-44tvh" event={"ID":"680f2aea-fad7-47b3-aabb-06c149297a03","Type":"ContainerStarted","Data":"ae4d82cc6451e275861d2baacdaff937cb7d15c1138a9c15df72cb8567d5e375"} Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.929291 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch" event={"ID":"f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0","Type":"ContainerDied","Data":"76813e2645ffad0c212c1b603d38b21de14c98215d90c1cd02eb48327b02d7ae"} Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.929341 4718 scope.go:117] "RemoveContainer" containerID="a8ad028a18e0e029b6893489d3bc268613d206107ad47660b2c1aa363754e251" Nov 23 15:06:09 crc kubenswrapper[4718]: I1123 15:06:09.929492 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-d6sch" Nov 23 15:06:10 crc kubenswrapper[4718]: I1123 15:06:10.085923 4718 scope.go:117] "RemoveContainer" containerID="3421c273d37354af91c53eb922676c9429576369efc54c6253c5d68158fdaede" Nov 23 15:06:10 crc kubenswrapper[4718]: I1123 15:06:10.125397 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-d6sch"] Nov 23 15:06:10 crc kubenswrapper[4718]: I1123 15:06:10.133644 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-d6sch"] Nov 23 15:06:10 crc kubenswrapper[4718]: I1123 15:06:10.456556 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0" path="/var/lib/kubelet/pods/f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0/volumes" Nov 23 15:06:10 crc kubenswrapper[4718]: I1123 15:06:10.945755 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-44tvh" event={"ID":"680f2aea-fad7-47b3-aabb-06c149297a03","Type":"ContainerStarted","Data":"d43fafd7aca281b2585166522ddf7feacfacbb5b2c543774866a27275485cac3"} Nov 23 15:06:10 crc kubenswrapper[4718]: I1123 15:06:10.945959 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:10 crc kubenswrapper[4718]: I1123 15:06:10.969644 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-44tvh" podStartSLOduration=2.969616237 podStartE2EDuration="2.969616237s" podCreationTimestamp="2025-11-23 15:06:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:06:10.965085214 +0000 UTC m=+1222.204705098" watchObservedRunningTime="2025-11-23 15:06:10.969616237 +0000 UTC m=+1222.209236121" Nov 23 15:06:19 crc kubenswrapper[4718]: I1123 15:06:19.135683 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-44tvh" Nov 23 15:06:19 crc kubenswrapper[4718]: I1123 15:06:19.205136 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-hflq4"] Nov 23 15:06:19 crc kubenswrapper[4718]: I1123 15:06:19.205385 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" podUID="46d4c5ea-f65e-4551-889b-dfd5fbbcd371" containerName="dnsmasq-dns" containerID="cri-o://5588e00de497f9069c223fc2408c16120c1be1d8df9b23ebc433c5396414e931" gracePeriod=10 Nov 23 15:06:19 crc kubenswrapper[4718]: I1123 15:06:19.839296 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4"
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.023907 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-config\") pod \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") "
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.024060 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-ovsdbserver-sb\") pod \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") "
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.024096 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-dns-swift-storage-0\") pod \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") "
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.024117 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-openstack-edpm-ipam\") pod \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") "
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.024148 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-dns-svc\") pod \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") "
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.024187 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-ovsdbserver-nb\") pod \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") "
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.024258 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmxmn\" (UniqueName: \"kubernetes.io/projected/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-kube-api-access-qmxmn\") pod \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") "
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.032333 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-kube-api-access-qmxmn" (OuterVolumeSpecName: "kube-api-access-qmxmn") pod "46d4c5ea-f65e-4551-889b-dfd5fbbcd371" (UID: "46d4c5ea-f65e-4551-889b-dfd5fbbcd371"). InnerVolumeSpecName "kube-api-access-qmxmn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.051539 4718 generic.go:334] "Generic (PLEG): container finished" podID="46d4c5ea-f65e-4551-889b-dfd5fbbcd371" containerID="5588e00de497f9069c223fc2408c16120c1be1d8df9b23ebc433c5396414e931" exitCode=0
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.051664 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" event={"ID":"46d4c5ea-f65e-4551-889b-dfd5fbbcd371","Type":"ContainerDied","Data":"5588e00de497f9069c223fc2408c16120c1be1d8df9b23ebc433c5396414e931"}
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.055947 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4" event={"ID":"46d4c5ea-f65e-4551-889b-dfd5fbbcd371","Type":"ContainerDied","Data":"2aca2f7f188be8f7fa82021550959465fab1f23010cb9aef7c0779f446ec5848"}
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.051761 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-hflq4"
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.055996 4718 scope.go:117] "RemoveContainer" containerID="5588e00de497f9069c223fc2408c16120c1be1d8df9b23ebc433c5396414e931"
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.087420 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "46d4c5ea-f65e-4551-889b-dfd5fbbcd371" (UID: "46d4c5ea-f65e-4551-889b-dfd5fbbcd371"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.091925 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "46d4c5ea-f65e-4551-889b-dfd5fbbcd371" (UID: "46d4c5ea-f65e-4551-889b-dfd5fbbcd371"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.093033 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "46d4c5ea-f65e-4551-889b-dfd5fbbcd371" (UID: "46d4c5ea-f65e-4551-889b-dfd5fbbcd371"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.097860 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-config" (OuterVolumeSpecName: "config") pod "46d4c5ea-f65e-4551-889b-dfd5fbbcd371" (UID: "46d4c5ea-f65e-4551-889b-dfd5fbbcd371"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:06:20 crc kubenswrapper[4718]: E1123 15:06:20.106325 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-dns-swift-storage-0 podName:46d4c5ea-f65e-4551-889b-dfd5fbbcd371 nodeName:}" failed. No retries permitted until 2025-11-23 15:06:20.606282395 +0000 UTC m=+1231.845902239 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "dns-swift-storage-0" (UniqueName: "kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-dns-swift-storage-0") pod "46d4c5ea-f65e-4551-889b-dfd5fbbcd371" (UID: "46d4c5ea-f65e-4551-889b-dfd5fbbcd371") : error deleting /var/lib/kubelet/pods/46d4c5ea-f65e-4551-889b-dfd5fbbcd371/volume-subpaths: remove /var/lib/kubelet/pods/46d4c5ea-f65e-4551-889b-dfd5fbbcd371/volume-subpaths: no such file or directory
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.106684 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "46d4c5ea-f65e-4551-889b-dfd5fbbcd371" (UID: "46d4c5ea-f65e-4551-889b-dfd5fbbcd371"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.141616 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.141662 4718 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.141676 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.141691 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.141703 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmxmn\" (UniqueName: \"kubernetes.io/projected/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-kube-api-access-qmxmn\") on node \"crc\" DevicePath \"\""
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.141713 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-config\") on node \"crc\" DevicePath \"\""
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.155490 4718 scope.go:117] "RemoveContainer" containerID="aece8c81572f78fd754416a095c53ba896da70afa29cb213cdf23027ad5f9fd6"
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.193741 4718 scope.go:117] "RemoveContainer" containerID="5588e00de497f9069c223fc2408c16120c1be1d8df9b23ebc433c5396414e931"
Nov 23 15:06:20 crc kubenswrapper[4718]: E1123 15:06:20.194696 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5588e00de497f9069c223fc2408c16120c1be1d8df9b23ebc433c5396414e931\": container with ID starting with 5588e00de497f9069c223fc2408c16120c1be1d8df9b23ebc433c5396414e931 not found: ID does not exist" containerID="5588e00de497f9069c223fc2408c16120c1be1d8df9b23ebc433c5396414e931"
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.194755 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5588e00de497f9069c223fc2408c16120c1be1d8df9b23ebc433c5396414e931"} err="failed to get container status \"5588e00de497f9069c223fc2408c16120c1be1d8df9b23ebc433c5396414e931\": rpc error: code = NotFound desc = could not find container \"5588e00de497f9069c223fc2408c16120c1be1d8df9b23ebc433c5396414e931\": container with ID starting with 5588e00de497f9069c223fc2408c16120c1be1d8df9b23ebc433c5396414e931 not found: ID does not exist"
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.194790 4718 scope.go:117] "RemoveContainer" containerID="aece8c81572f78fd754416a095c53ba896da70afa29cb213cdf23027ad5f9fd6"
Nov 23 15:06:20 crc kubenswrapper[4718]: E1123 15:06:20.195339 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aece8c81572f78fd754416a095c53ba896da70afa29cb213cdf23027ad5f9fd6\": container with ID starting with aece8c81572f78fd754416a095c53ba896da70afa29cb213cdf23027ad5f9fd6 not found: ID does not exist" containerID="aece8c81572f78fd754416a095c53ba896da70afa29cb213cdf23027ad5f9fd6"
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.195524 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aece8c81572f78fd754416a095c53ba896da70afa29cb213cdf23027ad5f9fd6"} err="failed to get container status \"aece8c81572f78fd754416a095c53ba896da70afa29cb213cdf23027ad5f9fd6\": rpc error: code = NotFound desc = could not find container \"aece8c81572f78fd754416a095c53ba896da70afa29cb213cdf23027ad5f9fd6\": container with ID starting with aece8c81572f78fd754416a095c53ba896da70afa29cb213cdf23027ad5f9fd6 not found: ID does not exist"
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.668057 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-dns-swift-storage-0\") pod \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\" (UID: \"46d4c5ea-f65e-4551-889b-dfd5fbbcd371\") "
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.668828 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "46d4c5ea-f65e-4551-889b-dfd5fbbcd371" (UID: "46d4c5ea-f65e-4551-889b-dfd5fbbcd371"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.669915 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46d4c5ea-f65e-4551-889b-dfd5fbbcd371-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 23 15:06:20 crc kubenswrapper[4718]: I1123 15:06:20.993830 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-hflq4"]
Nov 23 15:06:21 crc kubenswrapper[4718]: I1123 15:06:21.001307 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-hflq4"]
Nov 23 15:06:22 crc kubenswrapper[4718]: I1123 15:06:22.458626 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46d4c5ea-f65e-4551-889b-dfd5fbbcd371" path="/var/lib/kubelet/pods/46d4c5ea-f65e-4551-889b-dfd5fbbcd371/volumes"
Nov 23 15:06:31 crc kubenswrapper[4718]: I1123 15:06:31.216584 4718 generic.go:334] "Generic (PLEG): container finished" podID="6cd89fb4-b66f-4df5-940d-fe185bd5e039" containerID="3f66338918331ef9433a3836d70199c418519d7771dfa632b1cd56d7b0c707c4" exitCode=0
Nov 23 15:06:31 crc kubenswrapper[4718]: I1123 15:06:31.216657 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6cd89fb4-b66f-4df5-940d-fe185bd5e039","Type":"ContainerDied","Data":"3f66338918331ef9433a3836d70199c418519d7771dfa632b1cd56d7b0c707c4"}
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.235189 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6cd89fb4-b66f-4df5-940d-fe185bd5e039","Type":"ContainerStarted","Data":"d1d806ac62cf82661b24c3ee55a7486b6c5bf2c8407d8ca48d039446732ea827"}
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.235788 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.278790 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.278771174 podStartE2EDuration="37.278771174s" podCreationTimestamp="2025-11-23 15:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:06:32.264042985 +0000 UTC m=+1243.503662849" watchObservedRunningTime="2025-11-23 15:06:32.278771174 +0000 UTC m=+1243.518391018"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.413681 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h"]
Nov 23 15:06:32 crc kubenswrapper[4718]: E1123 15:06:32.414564 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46d4c5ea-f65e-4551-889b-dfd5fbbcd371" containerName="dnsmasq-dns"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.414594 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="46d4c5ea-f65e-4551-889b-dfd5fbbcd371" containerName="dnsmasq-dns"
Nov 23 15:06:32 crc kubenswrapper[4718]: E1123 15:06:32.414620 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0" containerName="dnsmasq-dns"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.414632 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0" containerName="dnsmasq-dns"
Nov 23 15:06:32 crc kubenswrapper[4718]: E1123 15:06:32.414651 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46d4c5ea-f65e-4551-889b-dfd5fbbcd371" containerName="init"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.414663 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="46d4c5ea-f65e-4551-889b-dfd5fbbcd371" containerName="init"
Nov 23 15:06:32 crc kubenswrapper[4718]: E1123 15:06:32.414702 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0" containerName="init"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.414713 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0" containerName="init"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.415053 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2c1c2b4-4f68-4ed1-bcc8-72316c595aa0" containerName="dnsmasq-dns"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.415091 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="46d4c5ea-f65e-4551-889b-dfd5fbbcd371" containerName="dnsmasq-dns"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.415964 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.421431 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m6ng8"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.421638 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.429681 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.429681 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.430677 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h"]
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.540366 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/834d3c5a-a503-42a6-a71d-8e00fe358ec6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h\" (UID: \"834d3c5a-a503-42a6-a71d-8e00fe358ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.540534 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2fzr\" (UniqueName: \"kubernetes.io/projected/834d3c5a-a503-42a6-a71d-8e00fe358ec6-kube-api-access-g2fzr\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h\" (UID: \"834d3c5a-a503-42a6-a71d-8e00fe358ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.540554 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/834d3c5a-a503-42a6-a71d-8e00fe358ec6-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h\" (UID: \"834d3c5a-a503-42a6-a71d-8e00fe358ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.540590 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/834d3c5a-a503-42a6-a71d-8e00fe358ec6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h\" (UID: \"834d3c5a-a503-42a6-a71d-8e00fe358ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.642363 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/834d3c5a-a503-42a6-a71d-8e00fe358ec6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h\" (UID: \"834d3c5a-a503-42a6-a71d-8e00fe358ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.642704 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2fzr\" (UniqueName: \"kubernetes.io/projected/834d3c5a-a503-42a6-a71d-8e00fe358ec6-kube-api-access-g2fzr\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h\" (UID: \"834d3c5a-a503-42a6-a71d-8e00fe358ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.642794 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/834d3c5a-a503-42a6-a71d-8e00fe358ec6-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h\" (UID: \"834d3c5a-a503-42a6-a71d-8e00fe358ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.642947 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/834d3c5a-a503-42a6-a71d-8e00fe358ec6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h\" (UID: \"834d3c5a-a503-42a6-a71d-8e00fe358ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.648385 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/834d3c5a-a503-42a6-a71d-8e00fe358ec6-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h\" (UID: \"834d3c5a-a503-42a6-a71d-8e00fe358ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.648724 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/834d3c5a-a503-42a6-a71d-8e00fe358ec6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h\" (UID: \"834d3c5a-a503-42a6-a71d-8e00fe358ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.649653 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/834d3c5a-a503-42a6-a71d-8e00fe358ec6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h\" (UID: \"834d3c5a-a503-42a6-a71d-8e00fe358ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.666157 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2fzr\" (UniqueName: \"kubernetes.io/projected/834d3c5a-a503-42a6-a71d-8e00fe358ec6-kube-api-access-g2fzr\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h\" (UID: \"834d3c5a-a503-42a6-a71d-8e00fe358ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h"
Nov 23 15:06:32 crc kubenswrapper[4718]: I1123 15:06:32.750156 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h"
Nov 23 15:06:33 crc kubenswrapper[4718]: I1123 15:06:33.247072 4718 generic.go:334] "Generic (PLEG): container finished" podID="133d2692-e5ce-4298-89d3-6fc11ab5f0b3" containerID="74b9fb00ebbffc738fa015e26bc3da27facf754121402fa1be49f47ac3ffdd37" exitCode=0
Nov 23 15:06:33 crc kubenswrapper[4718]: I1123 15:06:33.247156 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"133d2692-e5ce-4298-89d3-6fc11ab5f0b3","Type":"ContainerDied","Data":"74b9fb00ebbffc738fa015e26bc3da27facf754121402fa1be49f47ac3ffdd37"}
Nov 23 15:06:33 crc kubenswrapper[4718]: W1123 15:06:33.278222 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod834d3c5a_a503_42a6_a71d_8e00fe358ec6.slice/crio-630bb9c17a95f4272596fb57c48e1ca88cd3b47cae1fcb94fce25ba29693df04 WatchSource:0}: Error finding container 630bb9c17a95f4272596fb57c48e1ca88cd3b47cae1fcb94fce25ba29693df04: Status 404 returned error can't find the container with id 630bb9c17a95f4272596fb57c48e1ca88cd3b47cae1fcb94fce25ba29693df04
Nov 23 15:06:33 crc kubenswrapper[4718]: I1123 15:06:33.283123 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h"]
Nov 23 15:06:34 crc kubenswrapper[4718]: I1123 15:06:34.260086 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"133d2692-e5ce-4298-89d3-6fc11ab5f0b3","Type":"ContainerStarted","Data":"4a3fd233ecbbc53693216799d0c7d82c6ac2884f1ac37de89f750ba1b1ce4484"}
Nov 23 15:06:34 crc kubenswrapper[4718]: I1123 15:06:34.261603 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Nov 23 15:06:34 crc kubenswrapper[4718]: I1123 15:06:34.262609 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h" event={"ID":"834d3c5a-a503-42a6-a71d-8e00fe358ec6","Type":"ContainerStarted","Data":"630bb9c17a95f4272596fb57c48e1ca88cd3b47cae1fcb94fce25ba29693df04"}
Nov 23 15:06:34 crc kubenswrapper[4718]: I1123 15:06:34.289892 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.289873237 podStartE2EDuration="38.289873237s" podCreationTimestamp="2025-11-23 15:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:06:34.283534987 +0000 UTC m=+1245.523154871" watchObservedRunningTime="2025-11-23 15:06:34.289873237 +0000 UTC m=+1245.529493081"
Nov 23 15:06:42 crc kubenswrapper[4718]: I1123 15:06:42.233994 4718 scope.go:117] "RemoveContainer" containerID="4d189af6aaaa28c4f6206b731bf20e4da52974ba5c8be4e5a72009ab629998c4"
Nov 23 15:06:43 crc kubenswrapper[4718]: I1123 15:06:43.349825 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h" event={"ID":"834d3c5a-a503-42a6-a71d-8e00fe358ec6","Type":"ContainerStarted","Data":"7dcf5773fa04b2a19471482cd67d76aa786ab897b7783ee4c532b560d602a931"}
Nov 23 15:06:43 crc kubenswrapper[4718]: I1123 15:06:43.371564 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h" podStartSLOduration=2.536930669 podStartE2EDuration="11.371545589s" podCreationTimestamp="2025-11-23 15:06:32 +0000 UTC" firstStartedPulling="2025-11-23 15:06:33.281323079 +0000 UTC m=+1244.520942923" lastFinishedPulling="2025-11-23 15:06:42.115937959 +0000 UTC m=+1253.355557843" observedRunningTime="2025-11-23 15:06:43.365582188 +0000 UTC m=+1254.605202052" watchObservedRunningTime="2025-11-23 15:06:43.371545589 +0000 UTC m=+1254.611165433"
Nov 23 15:06:46 crc kubenswrapper[4718]: I1123 15:06:46.151601 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Nov 23 15:06:47 crc kubenswrapper[4718]: I1123 15:06:47.465516 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Nov 23 15:06:47 crc kubenswrapper[4718]: I1123 15:06:47.674001 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-589b8777c9-j8mvv" podUID="62536478-1337-4bad-b5e3-77cf6dd4d54b" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Nov 23 15:06:53 crc kubenswrapper[4718]: I1123 15:06:53.053134 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 15:06:53 crc kubenswrapper[4718]: I1123 15:06:53.053756 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 15:06:53 crc kubenswrapper[4718]: I1123 15:06:53.486075 4718 generic.go:334] "Generic (PLEG): container finished" podID="834d3c5a-a503-42a6-a71d-8e00fe358ec6" containerID="7dcf5773fa04b2a19471482cd67d76aa786ab897b7783ee4c532b560d602a931" exitCode=0
Nov 23 15:06:53 crc kubenswrapper[4718]: I1123 15:06:53.486139 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h" event={"ID":"834d3c5a-a503-42a6-a71d-8e00fe358ec6","Type":"ContainerDied","Data":"7dcf5773fa04b2a19471482cd67d76aa786ab897b7783ee4c532b560d602a931"}
Nov 23 15:06:54 crc kubenswrapper[4718]: I1123 15:06:54.928458 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h"
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.028122 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/834d3c5a-a503-42a6-a71d-8e00fe358ec6-ssh-key\") pod \"834d3c5a-a503-42a6-a71d-8e00fe358ec6\" (UID: \"834d3c5a-a503-42a6-a71d-8e00fe358ec6\") "
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.028204 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2fzr\" (UniqueName: \"kubernetes.io/projected/834d3c5a-a503-42a6-a71d-8e00fe358ec6-kube-api-access-g2fzr\") pod \"834d3c5a-a503-42a6-a71d-8e00fe358ec6\" (UID: \"834d3c5a-a503-42a6-a71d-8e00fe358ec6\") "
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.028373 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/834d3c5a-a503-42a6-a71d-8e00fe358ec6-repo-setup-combined-ca-bundle\") pod \"834d3c5a-a503-42a6-a71d-8e00fe358ec6\" (UID: \"834d3c5a-a503-42a6-a71d-8e00fe358ec6\") "
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.028474 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/834d3c5a-a503-42a6-a71d-8e00fe358ec6-inventory\") pod \"834d3c5a-a503-42a6-a71d-8e00fe358ec6\" (UID: \"834d3c5a-a503-42a6-a71d-8e00fe358ec6\") "
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.033170 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/834d3c5a-a503-42a6-a71d-8e00fe358ec6-kube-api-access-g2fzr" (OuterVolumeSpecName: "kube-api-access-g2fzr") pod "834d3c5a-a503-42a6-a71d-8e00fe358ec6" (UID: "834d3c5a-a503-42a6-a71d-8e00fe358ec6"). InnerVolumeSpecName "kube-api-access-g2fzr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.033721 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/834d3c5a-a503-42a6-a71d-8e00fe358ec6-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "834d3c5a-a503-42a6-a71d-8e00fe358ec6" (UID: "834d3c5a-a503-42a6-a71d-8e00fe358ec6"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.059137 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/834d3c5a-a503-42a6-a71d-8e00fe358ec6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "834d3c5a-a503-42a6-a71d-8e00fe358ec6" (UID: "834d3c5a-a503-42a6-a71d-8e00fe358ec6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.078214 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/834d3c5a-a503-42a6-a71d-8e00fe358ec6-inventory" (OuterVolumeSpecName: "inventory") pod "834d3c5a-a503-42a6-a71d-8e00fe358ec6" (UID: "834d3c5a-a503-42a6-a71d-8e00fe358ec6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.134343 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/834d3c5a-a503-42a6-a71d-8e00fe358ec6-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.134398 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2fzr\" (UniqueName: \"kubernetes.io/projected/834d3c5a-a503-42a6-a71d-8e00fe358ec6-kube-api-access-g2fzr\") on node \"crc\" DevicePath \"\""
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.134419 4718 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/834d3c5a-a503-42a6-a71d-8e00fe358ec6-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.134462 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/834d3c5a-a503-42a6-a71d-8e00fe358ec6-inventory\") on node \"crc\" DevicePath \"\""
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.541762 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h" event={"ID":"834d3c5a-a503-42a6-a71d-8e00fe358ec6","Type":"ContainerDied","Data":"630bb9c17a95f4272596fb57c48e1ca88cd3b47cae1fcb94fce25ba29693df04"}
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.542184 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="630bb9c17a95f4272596fb57c48e1ca88cd3b47cae1fcb94fce25ba29693df04"
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.542264 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h"
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.615294 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-km6kx"]
Nov 23 15:06:55 crc kubenswrapper[4718]: E1123 15:06:55.615974 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="834d3c5a-a503-42a6-a71d-8e00fe358ec6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.616006 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="834d3c5a-a503-42a6-a71d-8e00fe358ec6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.616350 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="834d3c5a-a503-42a6-a71d-8e00fe358ec6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.617180 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-km6kx"
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.619749 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m6ng8"
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.619913 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.620565 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.621403 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.625225 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-km6kx"]
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.651103 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74d2f450-1dcb-42ee-9b6f-da7389ddc9ce-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-km6kx\" (UID: \"74d2f450-1dcb-42ee-9b6f-da7389ddc9ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-km6kx"
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.651258 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74d2f450-1dcb-42ee-9b6f-da7389ddc9ce-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-km6kx\" (UID: \"74d2f450-1dcb-42ee-9b6f-da7389ddc9ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-km6kx"
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.651320 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xldrm\" (UniqueName: \"kubernetes.io/projected/74d2f450-1dcb-42ee-9b6f-da7389ddc9ce-kube-api-access-xldrm\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-km6kx\" (UID: \"74d2f450-1dcb-42ee-9b6f-da7389ddc9ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-km6kx"
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.752719 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74d2f450-1dcb-42ee-9b6f-da7389ddc9ce-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-km6kx\" (UID: \"74d2f450-1dcb-42ee-9b6f-da7389ddc9ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-km6kx"
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.752800 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xldrm\" (UniqueName: \"kubernetes.io/projected/74d2f450-1dcb-42ee-9b6f-da7389ddc9ce-kube-api-access-xldrm\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-km6kx\" (UID: \"74d2f450-1dcb-42ee-9b6f-da7389ddc9ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-km6kx"
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.752840 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74d2f450-1dcb-42ee-9b6f-da7389ddc9ce-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-km6kx\" (UID: \"74d2f450-1dcb-42ee-9b6f-da7389ddc9ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-km6kx"
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.757001 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74d2f450-1dcb-42ee-9b6f-da7389ddc9ce-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-km6kx\" (UID: \"74d2f450-1dcb-42ee-9b6f-da7389ddc9ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-km6kx"
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.759000 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74d2f450-1dcb-42ee-9b6f-da7389ddc9ce-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-km6kx\" (UID: \"74d2f450-1dcb-42ee-9b6f-da7389ddc9ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-km6kx"
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.769561 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xldrm\" (UniqueName: \"kubernetes.io/projected/74d2f450-1dcb-42ee-9b6f-da7389ddc9ce-kube-api-access-xldrm\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-km6kx\" (UID: \"74d2f450-1dcb-42ee-9b6f-da7389ddc9ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-km6kx"
Nov 23 15:06:55 crc kubenswrapper[4718]: I1123 15:06:55.940499 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-km6kx"
Nov 23 15:06:56 crc kubenswrapper[4718]: I1123 15:06:56.309098 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-km6kx"]
Nov 23 15:06:56 crc kubenswrapper[4718]: W1123 15:06:56.314061 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74d2f450_1dcb_42ee_9b6f_da7389ddc9ce.slice/crio-cb59a6e99ef3f3b84f67744d48632646df3825dddea04e8351fa2ff17a4ef021 WatchSource:0}: Error finding container cb59a6e99ef3f3b84f67744d48632646df3825dddea04e8351fa2ff17a4ef021: Status 404 returned error can't find the container with id cb59a6e99ef3f3b84f67744d48632646df3825dddea04e8351fa2ff17a4ef021
Nov 23 15:06:56 crc kubenswrapper[4718]: I1123 15:06:56.553305 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-km6kx" event={"ID":"74d2f450-1dcb-42ee-9b6f-da7389ddc9ce","Type":"ContainerStarted","Data":"cb59a6e99ef3f3b84f67744d48632646df3825dddea04e8351fa2ff17a4ef021"}
Nov 23 15:06:57 crc kubenswrapper[4718]: I1123 15:06:57.569704 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-km6kx" event={"ID":"74d2f450-1dcb-42ee-9b6f-da7389ddc9ce","Type":"ContainerStarted","Data":"faf90037bd9fb00f3953cf67251953e1110cba4d3eae37ea27a306a5cbcef126"}
Nov 23 15:06:57 crc kubenswrapper[4718]: I1123 15:06:57.610782 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-km6kx" podStartSLOduration=2.170113917 podStartE2EDuration="2.610759966s" podCreationTimestamp="2025-11-23 15:06:55 +0000 UTC" firstStartedPulling="2025-11-23 15:06:56.316718726 +0000 UTC m=+1267.556338570" lastFinishedPulling="2025-11-23 15:06:56.757364775 +0000 UTC m=+1267.996984619" observedRunningTime="2025-11-23 15:06:57.597940739 +0000 UTC m=+1268.837560623" watchObservedRunningTime="2025-11-23 15:06:57.610759966 +0000 UTC m=+1268.850379830"
Nov 23 15:06:59 crc kubenswrapper[4718]: I1123 15:06:59.589687 4718 generic.go:334] "Generic (PLEG): container finished" podID="74d2f450-1dcb-42ee-9b6f-da7389ddc9ce" containerID="faf90037bd9fb00f3953cf67251953e1110cba4d3eae37ea27a306a5cbcef126" exitCode=0
Nov 23 15:06:59 crc kubenswrapper[4718]: I1123 15:06:59.589684 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-km6kx" event={"ID":"74d2f450-1dcb-42ee-9b6f-da7389ddc9ce","Type":"ContainerDied","Data":"faf90037bd9fb00f3953cf67251953e1110cba4d3eae37ea27a306a5cbcef126"}
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.089036 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-km6kx"
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.272890 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74d2f450-1dcb-42ee-9b6f-da7389ddc9ce-inventory\") pod \"74d2f450-1dcb-42ee-9b6f-da7389ddc9ce\" (UID: \"74d2f450-1dcb-42ee-9b6f-da7389ddc9ce\") "
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.272985 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xldrm\" (UniqueName: \"kubernetes.io/projected/74d2f450-1dcb-42ee-9b6f-da7389ddc9ce-kube-api-access-xldrm\") pod \"74d2f450-1dcb-42ee-9b6f-da7389ddc9ce\" (UID: \"74d2f450-1dcb-42ee-9b6f-da7389ddc9ce\") "
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.273097 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74d2f450-1dcb-42ee-9b6f-da7389ddc9ce-ssh-key\") pod \"74d2f450-1dcb-42ee-9b6f-da7389ddc9ce\" (UID: \"74d2f450-1dcb-42ee-9b6f-da7389ddc9ce\") "
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.282720 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74d2f450-1dcb-42ee-9b6f-da7389ddc9ce-kube-api-access-xldrm" (OuterVolumeSpecName: "kube-api-access-xldrm") pod "74d2f450-1dcb-42ee-9b6f-da7389ddc9ce" (UID: "74d2f450-1dcb-42ee-9b6f-da7389ddc9ce"). InnerVolumeSpecName "kube-api-access-xldrm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.308303 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d2f450-1dcb-42ee-9b6f-da7389ddc9ce-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "74d2f450-1dcb-42ee-9b6f-da7389ddc9ce" (UID: "74d2f450-1dcb-42ee-9b6f-da7389ddc9ce"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.312691 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d2f450-1dcb-42ee-9b6f-da7389ddc9ce-inventory" (OuterVolumeSpecName: "inventory") pod "74d2f450-1dcb-42ee-9b6f-da7389ddc9ce" (UID: "74d2f450-1dcb-42ee-9b6f-da7389ddc9ce"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.375489 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74d2f450-1dcb-42ee-9b6f-da7389ddc9ce-inventory\") on node \"crc\" DevicePath \"\""
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.375535 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xldrm\" (UniqueName: \"kubernetes.io/projected/74d2f450-1dcb-42ee-9b6f-da7389ddc9ce-kube-api-access-xldrm\") on node \"crc\" DevicePath \"\""
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.375555 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74d2f450-1dcb-42ee-9b6f-da7389ddc9ce-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.614920 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-km6kx" event={"ID":"74d2f450-1dcb-42ee-9b6f-da7389ddc9ce","Type":"ContainerDied","Data":"cb59a6e99ef3f3b84f67744d48632646df3825dddea04e8351fa2ff17a4ef021"}
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.614981 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb59a6e99ef3f3b84f67744d48632646df3825dddea04e8351fa2ff17a4ef021"
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.615067 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-km6kx"
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.692737 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z"]
Nov 23 15:07:01 crc kubenswrapper[4718]: E1123 15:07:01.693367 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d2f450-1dcb-42ee-9b6f-da7389ddc9ce" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.693399 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d2f450-1dcb-42ee-9b6f-da7389ddc9ce" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.693777 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="74d2f450-1dcb-42ee-9b6f-da7389ddc9ce" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.694803 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z"
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.698238 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m6ng8"
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.698863 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.700641 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.700714 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.720564 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z"]
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.783603 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9fc2\" (UniqueName: \"kubernetes.io/projected/ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c-kube-api-access-g9fc2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z\" (UID: \"ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z"
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.783759 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z\" (UID: \"ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z"
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.783832 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z\" (UID: \"ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z"
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.783874 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z\" (UID: \"ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z"
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.885986 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9fc2\" (UniqueName: \"kubernetes.io/projected/ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c-kube-api-access-g9fc2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z\" (UID: \"ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z"
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.886163 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z\" (UID: \"ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z"
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.886232 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z\" (UID: \"ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z"
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.886266 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z\" (UID: \"ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z"
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.890662 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z\" (UID: \"ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z"
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.891895 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z\" (UID: \"ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z"
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.895305 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z\" (UID: \"ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z"
Nov 23 15:07:01 crc kubenswrapper[4718]: I1123 15:07:01.922251 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9fc2\" (UniqueName: \"kubernetes.io/projected/ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c-kube-api-access-g9fc2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z\" (UID: \"ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z"
Nov 23 15:07:02 crc kubenswrapper[4718]: I1123 15:07:02.013873 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z"
Nov 23 15:07:02 crc kubenswrapper[4718]: I1123 15:07:02.353370 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z"]
Nov 23 15:07:02 crc kubenswrapper[4718]: I1123 15:07:02.627092 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z" event={"ID":"ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c","Type":"ContainerStarted","Data":"598646cba9164cc009fcba12b99617f58a83f247ccf5afb80a82c6e32348e920"}
Nov 23 15:07:03 crc kubenswrapper[4718]: I1123 15:07:03.645972 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z" event={"ID":"ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c","Type":"ContainerStarted","Data":"bdb981d88e2c73941f87ffff8865161e1a203efd9fc5372f4abec5f449d2f230"}
Nov 23 15:07:03 crc kubenswrapper[4718]: I1123 15:07:03.682966 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z" podStartSLOduration=1.996998238 podStartE2EDuration="2.68292999s" podCreationTimestamp="2025-11-23 15:07:01 +0000 UTC" firstStartedPulling="2025-11-23 15:07:02.357053739 +0000 UTC m=+1273.596673593" lastFinishedPulling="2025-11-23 15:07:03.042985511 +0000 UTC m=+1274.282605345" observedRunningTime="2025-11-23 15:07:03.667199034 +0000 UTC m=+1274.906818928" watchObservedRunningTime="2025-11-23 15:07:03.68292999 +0000 UTC m=+1274.922549874"
Nov 23 15:07:23 crc kubenswrapper[4718]: I1123 15:07:23.053123 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 15:07:23 crc kubenswrapper[4718]: I1123 15:07:23.053847 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 15:07:42 crc kubenswrapper[4718]: I1123 15:07:42.405900 4718 scope.go:117] "RemoveContainer" containerID="613c99dd8879460c8944a9074c031bda1a4a28442f8d718fc1b0f8964c7eaef7"
Nov 23 15:07:53 crc kubenswrapper[4718]: I1123 15:07:53.054004 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 15:07:53 crc kubenswrapper[4718]: I1123 15:07:53.054682 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 15:07:53 crc kubenswrapper[4718]: I1123 15:07:53.054737 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw"
Nov 23 15:07:53 crc kubenswrapper[4718]: I1123 15:07:53.055597 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1501bec9fe9ec98aacf1e278e6a359530da30674903e2cad276cc832e866bc17"} pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 23 15:07:53 crc kubenswrapper[4718]: I1123 15:07:53.055675 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" containerID="cri-o://1501bec9fe9ec98aacf1e278e6a359530da30674903e2cad276cc832e866bc17" gracePeriod=600
Nov 23 15:07:53 crc kubenswrapper[4718]: I1123 15:07:53.225046 4718 generic.go:334] "Generic (PLEG): container finished" podID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerID="1501bec9fe9ec98aacf1e278e6a359530da30674903e2cad276cc832e866bc17" exitCode=0
Nov 23 15:07:53 crc kubenswrapper[4718]: I1123 15:07:53.225126 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerDied","Data":"1501bec9fe9ec98aacf1e278e6a359530da30674903e2cad276cc832e866bc17"}
Nov 23 15:07:53 crc kubenswrapper[4718]: I1123 15:07:53.225174 4718 scope.go:117] "RemoveContainer" containerID="71577aa824008968487c33bc21787ce3eb07e3b6f70d6cf5aac37a6881128f0a"
Nov 23 15:07:54 crc kubenswrapper[4718]: I1123 15:07:54.242377 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerStarted","Data":"2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c"}
Nov 23 15:08:42 crc kubenswrapper[4718]: I1123 15:08:42.465227 4718 scope.go:117] "RemoveContainer" containerID="5bda54d511db35bf484b5e54da4bec5107a0f415da4439587098774398852ae3"
Nov 23 15:08:42 crc kubenswrapper[4718]: I1123 15:08:42.489916 4718 scope.go:117] "RemoveContainer" containerID="c1aef4a1eaac963c42a4b7c228b213bbd1ac43b742d2cfab6a9c91930bb920ab"
Nov 23 15:08:42 crc kubenswrapper[4718]: I1123 15:08:42.510583 4718 scope.go:117] "RemoveContainer" containerID="2820e29c4f114c25ae7affc037dfd61dbe6ddc97552d4efe172c435a40f49260"
Nov 23 15:09:23 crc kubenswrapper[4718]: I1123 15:09:23.483550 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-57t5w"]
Nov 23 15:09:23 crc kubenswrapper[4718]: I1123 15:09:23.488072 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-57t5w"
Nov 23 15:09:23 crc kubenswrapper[4718]: I1123 15:09:23.495255 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-57t5w"]
Nov 23 15:09:23 crc kubenswrapper[4718]: I1123 15:09:23.615732 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8jgb\" (UniqueName: \"kubernetes.io/projected/e40fe77c-edaa-4a1a-9526-5c6c7399d890-kube-api-access-c8jgb\") pod \"redhat-marketplace-57t5w\" (UID: \"e40fe77c-edaa-4a1a-9526-5c6c7399d890\") " pod="openshift-marketplace/redhat-marketplace-57t5w"
Nov 23 15:09:23 crc kubenswrapper[4718]: I1123 15:09:23.615810 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40fe77c-edaa-4a1a-9526-5c6c7399d890-catalog-content\") pod \"redhat-marketplace-57t5w\" (UID: \"e40fe77c-edaa-4a1a-9526-5c6c7399d890\") " pod="openshift-marketplace/redhat-marketplace-57t5w"
Nov 23 15:09:23 crc kubenswrapper[4718]: I1123 15:09:23.615898 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40fe77c-edaa-4a1a-9526-5c6c7399d890-utilities\") pod \"redhat-marketplace-57t5w\" (UID: \"e40fe77c-edaa-4a1a-9526-5c6c7399d890\") " pod="openshift-marketplace/redhat-marketplace-57t5w"
Nov 23 15:09:23 crc kubenswrapper[4718]: I1123 15:09:23.718062 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40fe77c-edaa-4a1a-9526-5c6c7399d890-catalog-content\") pod \"redhat-marketplace-57t5w\" (UID: \"e40fe77c-edaa-4a1a-9526-5c6c7399d890\") " pod="openshift-marketplace/redhat-marketplace-57t5w"
Nov 23 15:09:23 crc kubenswrapper[4718]: I1123 15:09:23.718385 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40fe77c-edaa-4a1a-9526-5c6c7399d890-utilities\") pod \"redhat-marketplace-57t5w\" (UID: \"e40fe77c-edaa-4a1a-9526-5c6c7399d890\") " pod="openshift-marketplace/redhat-marketplace-57t5w"
Nov 23 15:09:23 crc kubenswrapper[4718]: I1123 15:09:23.718577 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8jgb\" (UniqueName: \"kubernetes.io/projected/e40fe77c-edaa-4a1a-9526-5c6c7399d890-kube-api-access-c8jgb\") pod \"redhat-marketplace-57t5w\" (UID: \"e40fe77c-edaa-4a1a-9526-5c6c7399d890\") " pod="openshift-marketplace/redhat-marketplace-57t5w"
Nov 23 15:09:23 crc kubenswrapper[4718]: I1123 15:09:23.718627 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40fe77c-edaa-4a1a-9526-5c6c7399d890-catalog-content\") pod \"redhat-marketplace-57t5w\" (UID: \"e40fe77c-edaa-4a1a-9526-5c6c7399d890\") " pod="openshift-marketplace/redhat-marketplace-57t5w"
Nov 23 15:09:23 crc kubenswrapper[4718]: I1123 15:09:23.718842 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40fe77c-edaa-4a1a-9526-5c6c7399d890-utilities\") pod \"redhat-marketplace-57t5w\" (UID: \"e40fe77c-edaa-4a1a-9526-5c6c7399d890\") " pod="openshift-marketplace/redhat-marketplace-57t5w"
Nov 23 15:09:23 crc kubenswrapper[4718]: I1123 15:09:23.742986 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8jgb\" (UniqueName: \"kubernetes.io/projected/e40fe77c-edaa-4a1a-9526-5c6c7399d890-kube-api-access-c8jgb\") pod \"redhat-marketplace-57t5w\" (UID: \"e40fe77c-edaa-4a1a-9526-5c6c7399d890\") " pod="openshift-marketplace/redhat-marketplace-57t5w"
Nov 23 15:09:23 crc kubenswrapper[4718]: I1123 15:09:23.832069 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-57t5w"
Nov 23 15:09:24 crc kubenswrapper[4718]: I1123 15:09:24.310940 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-57t5w"]
Nov 23 15:09:25 crc kubenswrapper[4718]: I1123 15:09:25.309942 4718 generic.go:334] "Generic (PLEG): container finished" podID="e40fe77c-edaa-4a1a-9526-5c6c7399d890" containerID="93d11a9448aa71c6cb189ab742088900679fe720b19b22b786dea992c2831257" exitCode=0
Nov 23 15:09:25 crc kubenswrapper[4718]: I1123 15:09:25.310035 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-57t5w" event={"ID":"e40fe77c-edaa-4a1a-9526-5c6c7399d890","Type":"ContainerDied","Data":"93d11a9448aa71c6cb189ab742088900679fe720b19b22b786dea992c2831257"}
Nov 23 15:09:25 crc kubenswrapper[4718]: I1123 15:09:25.310299 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-57t5w" event={"ID":"e40fe77c-edaa-4a1a-9526-5c6c7399d890","Type":"ContainerStarted","Data":"4f547fb500fe38519fbf3ae918644785a2965349037bb83c45bb4c25f1a57957"}
Nov 23 15:09:27 crc kubenswrapper[4718]: I1123 15:09:27.330156 4718 generic.go:334] "Generic (PLEG): container finished" podID="e40fe77c-edaa-4a1a-9526-5c6c7399d890" containerID="a14afc886dd4b0d90577844609a341fa2baf25af08d9d3945003d9e1e77af528" exitCode=0
Nov 23 15:09:27 crc kubenswrapper[4718]: I1123 15:09:27.330218 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-57t5w" event={"ID":"e40fe77c-edaa-4a1a-9526-5c6c7399d890","Type":"ContainerDied","Data":"a14afc886dd4b0d90577844609a341fa2baf25af08d9d3945003d9e1e77af528"}
Nov 23 15:09:28 crc kubenswrapper[4718]: I1123 15:09:28.343274 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-57t5w" event={"ID":"e40fe77c-edaa-4a1a-9526-5c6c7399d890","Type":"ContainerStarted","Data":"5151b37ba4e7f23dcdc460d67ebd24b36ea747d04b229e1c198c7ae37e37aecc"}
Nov 23 15:09:28 crc kubenswrapper[4718]: I1123 15:09:28.370168 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-57t5w" podStartSLOduration=2.9663470309999997 podStartE2EDuration="5.370146429s" podCreationTimestamp="2025-11-23 15:09:23 +0000 UTC" firstStartedPulling="2025-11-23 15:09:25.312294965 +0000 UTC m=+1416.551914859" lastFinishedPulling="2025-11-23 15:09:27.716094413 +0000 UTC m=+1418.955714257" observedRunningTime="2025-11-23 15:09:28.362526013 +0000 UTC m=+1419.602145857" watchObservedRunningTime="2025-11-23 15:09:28.370146429 +0000 UTC m=+1419.609766273"
Nov 23 15:09:33 crc kubenswrapper[4718]: I1123 15:09:33.833057 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-57t5w"
Nov 23 15:09:33 crc kubenswrapper[4718]: I1123 15:09:33.833822 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-57t5w"
Nov 23 15:09:33 crc kubenswrapper[4718]: I1123 15:09:33.894843 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-57t5w"
Nov 23 15:09:34 crc kubenswrapper[4718]: I1123 15:09:34.480880 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-57t5w"
Nov 23 15:09:34 crc kubenswrapper[4718]: I1123 15:09:34.523043 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-57t5w"]
Nov 23 15:09:36 crc kubenswrapper[4718]: I1123 15:09:36.415630 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-57t5w" podUID="e40fe77c-edaa-4a1a-9526-5c6c7399d890" containerName="registry-server" containerID="cri-o://5151b37ba4e7f23dcdc460d67ebd24b36ea747d04b229e1c198c7ae37e37aecc" gracePeriod=2
Nov 23 15:09:36 crc kubenswrapper[4718]: I1123 15:09:36.993408 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-57t5w"
Nov 23 15:09:37 crc kubenswrapper[4718]: I1123 15:09:37.166796 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40fe77c-edaa-4a1a-9526-5c6c7399d890-utilities\") pod \"e40fe77c-edaa-4a1a-9526-5c6c7399d890\" (UID: \"e40fe77c-edaa-4a1a-9526-5c6c7399d890\") "
Nov 23 15:09:37 crc kubenswrapper[4718]: I1123 15:09:37.166903 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8jgb\" (UniqueName: \"kubernetes.io/projected/e40fe77c-edaa-4a1a-9526-5c6c7399d890-kube-api-access-c8jgb\") pod \"e40fe77c-edaa-4a1a-9526-5c6c7399d890\" (UID: \"e40fe77c-edaa-4a1a-9526-5c6c7399d890\") "
Nov 23 15:09:37 crc kubenswrapper[4718]: I1123 15:09:37.166969 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40fe77c-edaa-4a1a-9526-5c6c7399d890-catalog-content\") pod \"e40fe77c-edaa-4a1a-9526-5c6c7399d890\" (UID: \"e40fe77c-edaa-4a1a-9526-5c6c7399d890\") "
Nov 23 15:09:37 crc kubenswrapper[4718]: I1123 15:09:37.167808 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e40fe77c-edaa-4a1a-9526-5c6c7399d890-utilities" (OuterVolumeSpecName: "utilities") pod "e40fe77c-edaa-4a1a-9526-5c6c7399d890" (UID: "e40fe77c-edaa-4a1a-9526-5c6c7399d890"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 15:09:37 crc kubenswrapper[4718]: I1123 15:09:37.179415 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e40fe77c-edaa-4a1a-9526-5c6c7399d890-kube-api-access-c8jgb" (OuterVolumeSpecName: "kube-api-access-c8jgb") pod "e40fe77c-edaa-4a1a-9526-5c6c7399d890" (UID: "e40fe77c-edaa-4a1a-9526-5c6c7399d890"). InnerVolumeSpecName "kube-api-access-c8jgb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:09:37 crc kubenswrapper[4718]: I1123 15:09:37.200233 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e40fe77c-edaa-4a1a-9526-5c6c7399d890-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e40fe77c-edaa-4a1a-9526-5c6c7399d890" (UID: "e40fe77c-edaa-4a1a-9526-5c6c7399d890"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:09:37 crc kubenswrapper[4718]: I1123 15:09:37.269786 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40fe77c-edaa-4a1a-9526-5c6c7399d890-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 15:09:37 crc kubenswrapper[4718]: I1123 15:09:37.269825 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8jgb\" (UniqueName: \"kubernetes.io/projected/e40fe77c-edaa-4a1a-9526-5c6c7399d890-kube-api-access-c8jgb\") on node \"crc\" DevicePath \"\"" Nov 23 15:09:37 crc kubenswrapper[4718]: I1123 15:09:37.269840 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40fe77c-edaa-4a1a-9526-5c6c7399d890-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 15:09:37 crc kubenswrapper[4718]: I1123 15:09:37.430409 4718 generic.go:334] "Generic (PLEG): container finished" podID="e40fe77c-edaa-4a1a-9526-5c6c7399d890" containerID="5151b37ba4e7f23dcdc460d67ebd24b36ea747d04b229e1c198c7ae37e37aecc" exitCode=0 Nov 23 15:09:37 crc kubenswrapper[4718]: I1123 15:09:37.430479 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-57t5w" Nov 23 15:09:37 crc kubenswrapper[4718]: I1123 15:09:37.430512 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-57t5w" event={"ID":"e40fe77c-edaa-4a1a-9526-5c6c7399d890","Type":"ContainerDied","Data":"5151b37ba4e7f23dcdc460d67ebd24b36ea747d04b229e1c198c7ae37e37aecc"} Nov 23 15:09:37 crc kubenswrapper[4718]: I1123 15:09:37.431737 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-57t5w" event={"ID":"e40fe77c-edaa-4a1a-9526-5c6c7399d890","Type":"ContainerDied","Data":"4f547fb500fe38519fbf3ae918644785a2965349037bb83c45bb4c25f1a57957"} Nov 23 15:09:37 crc kubenswrapper[4718]: I1123 15:09:37.431799 4718 scope.go:117] "RemoveContainer" containerID="5151b37ba4e7f23dcdc460d67ebd24b36ea747d04b229e1c198c7ae37e37aecc" Nov 23 15:09:37 crc kubenswrapper[4718]: I1123 15:09:37.463146 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-57t5w"] Nov 23 15:09:37 crc kubenswrapper[4718]: I1123 15:09:37.472291 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-57t5w"] Nov 23 15:09:37 crc kubenswrapper[4718]: I1123 15:09:37.498122 4718 scope.go:117] "RemoveContainer" containerID="a14afc886dd4b0d90577844609a341fa2baf25af08d9d3945003d9e1e77af528" Nov 23 15:09:37 crc kubenswrapper[4718]: I1123 15:09:37.527109 4718 scope.go:117] "RemoveContainer" containerID="93d11a9448aa71c6cb189ab742088900679fe720b19b22b786dea992c2831257" Nov 23 15:09:37 crc kubenswrapper[4718]: I1123 15:09:37.567062 4718 scope.go:117] "RemoveContainer" containerID="5151b37ba4e7f23dcdc460d67ebd24b36ea747d04b229e1c198c7ae37e37aecc" Nov 23 15:09:37 crc kubenswrapper[4718]: E1123 15:09:37.567431 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5151b37ba4e7f23dcdc460d67ebd24b36ea747d04b229e1c198c7ae37e37aecc\": container with ID starting with 5151b37ba4e7f23dcdc460d67ebd24b36ea747d04b229e1c198c7ae37e37aecc not found: ID does not exist" containerID="5151b37ba4e7f23dcdc460d67ebd24b36ea747d04b229e1c198c7ae37e37aecc" Nov 23 15:09:37 crc kubenswrapper[4718]: I1123 15:09:37.567476 4718 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5151b37ba4e7f23dcdc460d67ebd24b36ea747d04b229e1c198c7ae37e37aecc"} err="failed to get container status \"5151b37ba4e7f23dcdc460d67ebd24b36ea747d04b229e1c198c7ae37e37aecc\": rpc error: code = NotFound desc = could not find container \"5151b37ba4e7f23dcdc460d67ebd24b36ea747d04b229e1c198c7ae37e37aecc\": container with ID starting with 5151b37ba4e7f23dcdc460d67ebd24b36ea747d04b229e1c198c7ae37e37aecc not found: ID does not exist" Nov 23 15:09:37 crc kubenswrapper[4718]: I1123 15:09:37.567496 4718 scope.go:117] "RemoveContainer" containerID="a14afc886dd4b0d90577844609a341fa2baf25af08d9d3945003d9e1e77af528" Nov 23 15:09:37 crc kubenswrapper[4718]: E1123 15:09:37.567992 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a14afc886dd4b0d90577844609a341fa2baf25af08d9d3945003d9e1e77af528\": container with ID starting with a14afc886dd4b0d90577844609a341fa2baf25af08d9d3945003d9e1e77af528 not found: ID does not exist" containerID="a14afc886dd4b0d90577844609a341fa2baf25af08d9d3945003d9e1e77af528" Nov 23 15:09:37 crc kubenswrapper[4718]: I1123 15:09:37.568050 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a14afc886dd4b0d90577844609a341fa2baf25af08d9d3945003d9e1e77af528"} err="failed to get container status \"a14afc886dd4b0d90577844609a341fa2baf25af08d9d3945003d9e1e77af528\": rpc error: code = NotFound desc = could not find container \"a14afc886dd4b0d90577844609a341fa2baf25af08d9d3945003d9e1e77af528\": container with ID starting with a14afc886dd4b0d90577844609a341fa2baf25af08d9d3945003d9e1e77af528 not found: ID does not exist" Nov 23 15:09:37 crc kubenswrapper[4718]: I1123 15:09:37.568085 4718 scope.go:117] "RemoveContainer" containerID="93d11a9448aa71c6cb189ab742088900679fe720b19b22b786dea992c2831257" Nov 23 15:09:37 crc kubenswrapper[4718]: E1123 15:09:37.568571 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93d11a9448aa71c6cb189ab742088900679fe720b19b22b786dea992c2831257\": container with ID starting with 93d11a9448aa71c6cb189ab742088900679fe720b19b22b786dea992c2831257 not found: ID does not exist" containerID="93d11a9448aa71c6cb189ab742088900679fe720b19b22b786dea992c2831257" Nov 23 15:09:37 crc kubenswrapper[4718]: I1123 15:09:37.568610 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93d11a9448aa71c6cb189ab742088900679fe720b19b22b786dea992c2831257"} err="failed to get container status \"93d11a9448aa71c6cb189ab742088900679fe720b19b22b786dea992c2831257\": rpc error: code = NotFound desc = could not find container \"93d11a9448aa71c6cb189ab742088900679fe720b19b22b786dea992c2831257\": container with ID starting with 93d11a9448aa71c6cb189ab742088900679fe720b19b22b786dea992c2831257 not found: ID does not exist" Nov 23 15:09:38 crc kubenswrapper[4718]: I1123 15:09:38.458779 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e40fe77c-edaa-4a1a-9526-5c6c7399d890" path="/var/lib/kubelet/pods/e40fe77c-edaa-4a1a-9526-5c6c7399d890/volumes" Nov 23 15:09:45 crc kubenswrapper[4718]: I1123 15:09:45.663649 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hb44c"] Nov 23 15:09:45 crc kubenswrapper[4718]: E1123 15:09:45.664862 4718 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e40fe77c-edaa-4a1a-9526-5c6c7399d890" containerName="extract-content" Nov 23 15:09:45 crc kubenswrapper[4718]: I1123 15:09:45.664882 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40fe77c-edaa-4a1a-9526-5c6c7399d890" containerName="extract-content" Nov 23 15:09:45 crc kubenswrapper[4718]: E1123 15:09:45.664895 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40fe77c-edaa-4a1a-9526-5c6c7399d890" containerName="extract-utilities" Nov 23 15:09:45 crc kubenswrapper[4718]: I1123 15:09:45.664904 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40fe77c-edaa-4a1a-9526-5c6c7399d890" containerName="extract-utilities" Nov 23 15:09:45 crc kubenswrapper[4718]: E1123 15:09:45.664941 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40fe77c-edaa-4a1a-9526-5c6c7399d890" containerName="registry-server" Nov 23 15:09:45 crc kubenswrapper[4718]: I1123 15:09:45.664952 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40fe77c-edaa-4a1a-9526-5c6c7399d890" containerName="registry-server" Nov 23 15:09:45 crc kubenswrapper[4718]: I1123 15:09:45.665204 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="e40fe77c-edaa-4a1a-9526-5c6c7399d890" containerName="registry-server" Nov 23 15:09:45 crc kubenswrapper[4718]: I1123 15:09:45.667042 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hb44c" Nov 23 15:09:45 crc kubenswrapper[4718]: I1123 15:09:45.703487 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hb44c"] Nov 23 15:09:45 crc kubenswrapper[4718]: I1123 15:09:45.758117 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b496519f-c8be-45cb-ae47-ea6f4bf15e1a-utilities\") pod \"certified-operators-hb44c\" (UID: \"b496519f-c8be-45cb-ae47-ea6f4bf15e1a\") " pod="openshift-marketplace/certified-operators-hb44c" Nov 23 15:09:45 crc kubenswrapper[4718]: I1123 15:09:45.758292 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7smb\" (UniqueName: \"kubernetes.io/projected/b496519f-c8be-45cb-ae47-ea6f4bf15e1a-kube-api-access-q7smb\") pod \"certified-operators-hb44c\" (UID: \"b496519f-c8be-45cb-ae47-ea6f4bf15e1a\") " pod="openshift-marketplace/certified-operators-hb44c" Nov 23 15:09:45 crc kubenswrapper[4718]: I1123 15:09:45.758353 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b496519f-c8be-45cb-ae47-ea6f4bf15e1a-catalog-content\") pod \"certified-operators-hb44c\" (UID: \"b496519f-c8be-45cb-ae47-ea6f4bf15e1a\") " pod="openshift-marketplace/certified-operators-hb44c" Nov 23 15:09:45 crc kubenswrapper[4718]: I1123 15:09:45.859810 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b496519f-c8be-45cb-ae47-ea6f4bf15e1a-utilities\") pod \"certified-operators-hb44c\" (UID: \"b496519f-c8be-45cb-ae47-ea6f4bf15e1a\") " pod="openshift-marketplace/certified-operators-hb44c" Nov 23 15:09:45 crc kubenswrapper[4718]: I1123 15:09:45.859912 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7smb\" (UniqueName: \"kubernetes.io/projected/b496519f-c8be-45cb-ae47-ea6f4bf15e1a-kube-api-access-q7smb\") pod 
\"certified-operators-hb44c\" (UID: \"b496519f-c8be-45cb-ae47-ea6f4bf15e1a\") " pod="openshift-marketplace/certified-operators-hb44c" Nov 23 15:09:45 crc kubenswrapper[4718]: I1123 15:09:45.859949 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b496519f-c8be-45cb-ae47-ea6f4bf15e1a-catalog-content\") pod \"certified-operators-hb44c\" (UID: \"b496519f-c8be-45cb-ae47-ea6f4bf15e1a\") " pod="openshift-marketplace/certified-operators-hb44c" Nov 23 15:09:45 crc kubenswrapper[4718]: I1123 15:09:45.860425 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b496519f-c8be-45cb-ae47-ea6f4bf15e1a-catalog-content\") pod \"certified-operators-hb44c\" (UID: \"b496519f-c8be-45cb-ae47-ea6f4bf15e1a\") " pod="openshift-marketplace/certified-operators-hb44c" Nov 23 15:09:45 crc kubenswrapper[4718]: I1123 15:09:45.860459 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b496519f-c8be-45cb-ae47-ea6f4bf15e1a-utilities\") pod \"certified-operators-hb44c\" (UID: \"b496519f-c8be-45cb-ae47-ea6f4bf15e1a\") " pod="openshift-marketplace/certified-operators-hb44c" Nov 23 15:09:45 crc kubenswrapper[4718]: I1123 15:09:45.879351 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7smb\" (UniqueName: \"kubernetes.io/projected/b496519f-c8be-45cb-ae47-ea6f4bf15e1a-kube-api-access-q7smb\") pod \"certified-operators-hb44c\" (UID: \"b496519f-c8be-45cb-ae47-ea6f4bf15e1a\") " pod="openshift-marketplace/certified-operators-hb44c" Nov 23 15:09:45 crc kubenswrapper[4718]: I1123 15:09:45.998159 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hb44c" Nov 23 15:09:46 crc kubenswrapper[4718]: I1123 15:09:46.501292 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hb44c"] Nov 23 15:09:46 crc kubenswrapper[4718]: I1123 15:09:46.519832 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hb44c" event={"ID":"b496519f-c8be-45cb-ae47-ea6f4bf15e1a","Type":"ContainerStarted","Data":"6e2e278dd8d9cc261b5a0f5c57fd5bbed003028e1d8d077c025aec8099180f47"} Nov 23 15:09:47 crc kubenswrapper[4718]: I1123 15:09:47.533772 4718 generic.go:334] "Generic (PLEG): container finished" podID="b496519f-c8be-45cb-ae47-ea6f4bf15e1a" containerID="3fbdd09f82a40b0f296aa67b4b95bb6309102b2b1904eec387305cc84c30790a" exitCode=0 Nov 23 15:09:47 crc kubenswrapper[4718]: I1123 15:09:47.533968 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hb44c" event={"ID":"b496519f-c8be-45cb-ae47-ea6f4bf15e1a","Type":"ContainerDied","Data":"3fbdd09f82a40b0f296aa67b4b95bb6309102b2b1904eec387305cc84c30790a"} Nov 23 15:09:47 crc kubenswrapper[4718]: I1123 15:09:47.537512 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 15:09:48 crc kubenswrapper[4718]: I1123 15:09:48.560546 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hb44c" event={"ID":"b496519f-c8be-45cb-ae47-ea6f4bf15e1a","Type":"ContainerStarted","Data":"cb30eef3f9c0c81e419945f195681dc8e87f2bca6d72d9932b8f8409302fc719"} Nov 23 15:09:49 crc kubenswrapper[4718]: I1123 15:09:49.576838 4718 generic.go:334] "Generic (PLEG): container finished" podID="b496519f-c8be-45cb-ae47-ea6f4bf15e1a" containerID="cb30eef3f9c0c81e419945f195681dc8e87f2bca6d72d9932b8f8409302fc719" exitCode=0 Nov 23 15:09:49 crc kubenswrapper[4718]: I1123 15:09:49.576880 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hb44c" event={"ID":"b496519f-c8be-45cb-ae47-ea6f4bf15e1a","Type":"ContainerDied","Data":"cb30eef3f9c0c81e419945f195681dc8e87f2bca6d72d9932b8f8409302fc719"} Nov 23 15:09:50 crc kubenswrapper[4718]: I1123 15:09:50.589652 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hb44c" event={"ID":"b496519f-c8be-45cb-ae47-ea6f4bf15e1a","Type":"ContainerStarted","Data":"4cbb6b1bec0d7d746f94a3029279908f946e83aad03208b4608713cbd482477f"} Nov 23 15:09:50 crc kubenswrapper[4718]: I1123 15:09:50.612554 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hb44c" podStartSLOduration=3.038872255 podStartE2EDuration="5.612533889s" podCreationTimestamp="2025-11-23 15:09:45 +0000 UTC" firstStartedPulling="2025-11-23 15:09:47.537163374 +0000 UTC m=+1438.776783238" lastFinishedPulling="2025-11-23 15:09:50.110825018 +0000 UTC m=+1441.350444872" observedRunningTime="2025-11-23 15:09:50.604200637 +0000 UTC m=+1441.843820501" watchObservedRunningTime="2025-11-23 15:09:50.612533889 +0000 UTC m=+1441.852153733" Nov 23 15:09:51 crc kubenswrapper[4718]: I1123 15:09:51.042359 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mrn78"] Nov 23 15:09:51 crc kubenswrapper[4718]: I1123 15:09:51.044626 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mrn78" Nov 23 15:09:51 crc kubenswrapper[4718]: I1123 15:09:51.058177 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mrn78"] Nov 23 15:09:51 crc kubenswrapper[4718]: I1123 15:09:51.072101 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ffd0dd1-a7e7-4a77-ac68-0b98259901dd-utilities\") pod \"redhat-operators-mrn78\" (UID: \"8ffd0dd1-a7e7-4a77-ac68-0b98259901dd\") " pod="openshift-marketplace/redhat-operators-mrn78" Nov 23 15:09:51 crc kubenswrapper[4718]: I1123 15:09:51.072162 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ffd0dd1-a7e7-4a77-ac68-0b98259901dd-catalog-content\") pod \"redhat-operators-mrn78\" (UID: \"8ffd0dd1-a7e7-4a77-ac68-0b98259901dd\") " pod="openshift-marketplace/redhat-operators-mrn78" Nov 23 15:09:51 crc kubenswrapper[4718]: I1123 15:09:51.072271 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt6jv\" (UniqueName: \"kubernetes.io/projected/8ffd0dd1-a7e7-4a77-ac68-0b98259901dd-kube-api-access-qt6jv\") pod \"redhat-operators-mrn78\" (UID: \"8ffd0dd1-a7e7-4a77-ac68-0b98259901dd\") " pod="openshift-marketplace/redhat-operators-mrn78" Nov 23 15:09:51 crc kubenswrapper[4718]: I1123 15:09:51.174844 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ffd0dd1-a7e7-4a77-ac68-0b98259901dd-utilities\") pod \"redhat-operators-mrn78\" (UID: \"8ffd0dd1-a7e7-4a77-ac68-0b98259901dd\") " pod="openshift-marketplace/redhat-operators-mrn78" Nov 23 15:09:51 crc kubenswrapper[4718]: I1123 15:09:51.174897 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ffd0dd1-a7e7-4a77-ac68-0b98259901dd-catalog-content\") pod \"redhat-operators-mrn78\" (UID: \"8ffd0dd1-a7e7-4a77-ac68-0b98259901dd\") " pod="openshift-marketplace/redhat-operators-mrn78" Nov 23 15:09:51 crc kubenswrapper[4718]: I1123 15:09:51.175148 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt6jv\" (UniqueName: \"kubernetes.io/projected/8ffd0dd1-a7e7-4a77-ac68-0b98259901dd-kube-api-access-qt6jv\") pod \"redhat-operators-mrn78\" (UID: \"8ffd0dd1-a7e7-4a77-ac68-0b98259901dd\") " pod="openshift-marketplace/redhat-operators-mrn78" Nov 23 15:09:51 crc kubenswrapper[4718]: I1123 15:09:51.175428 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ffd0dd1-a7e7-4a77-ac68-0b98259901dd-utilities\") pod \"redhat-operators-mrn78\" (UID: \"8ffd0dd1-a7e7-4a77-ac68-0b98259901dd\") " pod="openshift-marketplace/redhat-operators-mrn78" Nov 23 15:09:51 crc kubenswrapper[4718]: I1123 15:09:51.175543 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ffd0dd1-a7e7-4a77-ac68-0b98259901dd-catalog-content\") pod \"redhat-operators-mrn78\" (UID: \"8ffd0dd1-a7e7-4a77-ac68-0b98259901dd\") " pod="openshift-marketplace/redhat-operators-mrn78" Nov 23 15:09:51 crc kubenswrapper[4718]: I1123 15:09:51.211520 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qt6jv\" (UniqueName: \"kubernetes.io/projected/8ffd0dd1-a7e7-4a77-ac68-0b98259901dd-kube-api-access-qt6jv\") pod \"redhat-operators-mrn78\" (UID: \"8ffd0dd1-a7e7-4a77-ac68-0b98259901dd\") " pod="openshift-marketplace/redhat-operators-mrn78" Nov 23 15:09:51 crc kubenswrapper[4718]: I1123 15:09:51.395759 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mrn78" Nov 23 15:09:51 crc kubenswrapper[4718]: I1123 15:09:51.897496 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mrn78"] Nov 23 15:09:52 crc kubenswrapper[4718]: I1123 15:09:52.614722 4718 generic.go:334] "Generic (PLEG): container finished" podID="8ffd0dd1-a7e7-4a77-ac68-0b98259901dd" containerID="2db80d08be4ed60c7ad58302e7ac05f86b7ca4a98e8c167380d1b8ef50e55cd4" exitCode=0 Nov 23 15:09:52 crc kubenswrapper[4718]: I1123 15:09:52.614767 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrn78" event={"ID":"8ffd0dd1-a7e7-4a77-ac68-0b98259901dd","Type":"ContainerDied","Data":"2db80d08be4ed60c7ad58302e7ac05f86b7ca4a98e8c167380d1b8ef50e55cd4"} Nov 23 15:09:52 crc kubenswrapper[4718]: I1123 15:09:52.615051 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrn78" event={"ID":"8ffd0dd1-a7e7-4a77-ac68-0b98259901dd","Type":"ContainerStarted","Data":"96ab1920dbbb3fc611c0bfefc2ba00a2c9ea58b70feb55af946c18db9b8bd085"} Nov 23 15:09:53 crc kubenswrapper[4718]: I1123 15:09:53.052569 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 15:09:53 crc kubenswrapper[4718]: I1123 15:09:53.052631 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 15:09:53 crc kubenswrapper[4718]: I1123 15:09:53.625314 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrn78" event={"ID":"8ffd0dd1-a7e7-4a77-ac68-0b98259901dd","Type":"ContainerStarted","Data":"282ff60d803aa88a8539d4e3ab34dca8484ab03c820859c219074381e7a51bc9"} Nov 23 15:09:55 crc kubenswrapper[4718]: I1123 15:09:55.999083 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hb44c" Nov 23 15:09:55 crc kubenswrapper[4718]: I1123 15:09:55.999418 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hb44c" Nov 23 15:09:56 crc kubenswrapper[4718]: I1123 15:09:56.056941 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hb44c" Nov 23 15:09:56 crc kubenswrapper[4718]: I1123 15:09:56.657654 4718 generic.go:334] "Generic (PLEG): container finished" podID="8ffd0dd1-a7e7-4a77-ac68-0b98259901dd" containerID="282ff60d803aa88a8539d4e3ab34dca8484ab03c820859c219074381e7a51bc9" exitCode=0 Nov 23 15:09:56 crc kubenswrapper[4718]: I1123 15:09:56.657721 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-mrn78" event={"ID":"8ffd0dd1-a7e7-4a77-ac68-0b98259901dd","Type":"ContainerDied","Data":"282ff60d803aa88a8539d4e3ab34dca8484ab03c820859c219074381e7a51bc9"} Nov 23 15:09:56 crc kubenswrapper[4718]: I1123 15:09:56.730935 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hb44c" Nov 23 15:09:57 crc kubenswrapper[4718]: I1123 15:09:57.631788 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hb44c"] Nov 23 15:09:57 crc kubenswrapper[4718]: I1123 15:09:57.670154 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrn78" event={"ID":"8ffd0dd1-a7e7-4a77-ac68-0b98259901dd","Type":"ContainerStarted","Data":"6eb42decefa9b1de589112a972d42b7f16fe2de0025c43eb1c5147b69fed776c"} Nov 23 15:09:57 crc kubenswrapper[4718]: I1123 15:09:57.695462 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mrn78" podStartSLOduration=2.22081254 podStartE2EDuration="6.695408909s" podCreationTimestamp="2025-11-23 15:09:51 +0000 UTC" firstStartedPulling="2025-11-23 15:09:52.616809906 +0000 UTC m=+1443.856429750" lastFinishedPulling="2025-11-23 15:09:57.091406275 +0000 UTC m=+1448.331026119" observedRunningTime="2025-11-23 15:09:57.688124285 +0000 UTC m=+1448.927744139" watchObservedRunningTime="2025-11-23 15:09:57.695408909 +0000 UTC m=+1448.935028763" Nov 23 15:09:58 crc kubenswrapper[4718]: I1123 15:09:58.688029 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hb44c" podUID="b496519f-c8be-45cb-ae47-ea6f4bf15e1a" containerName="registry-server" containerID="cri-o://4cbb6b1bec0d7d746f94a3029279908f946e83aad03208b4608713cbd482477f" gracePeriod=2 Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.137085 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hb44c" Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.318934 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b496519f-c8be-45cb-ae47-ea6f4bf15e1a-utilities\") pod \"b496519f-c8be-45cb-ae47-ea6f4bf15e1a\" (UID: \"b496519f-c8be-45cb-ae47-ea6f4bf15e1a\") " Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.319198 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7smb\" (UniqueName: \"kubernetes.io/projected/b496519f-c8be-45cb-ae47-ea6f4bf15e1a-kube-api-access-q7smb\") pod \"b496519f-c8be-45cb-ae47-ea6f4bf15e1a\" (UID: \"b496519f-c8be-45cb-ae47-ea6f4bf15e1a\") " Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.319236 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b496519f-c8be-45cb-ae47-ea6f4bf15e1a-catalog-content\") pod \"b496519f-c8be-45cb-ae47-ea6f4bf15e1a\" (UID: \"b496519f-c8be-45cb-ae47-ea6f4bf15e1a\") " Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.320071 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b496519f-c8be-45cb-ae47-ea6f4bf15e1a-utilities" (OuterVolumeSpecName: "utilities") pod "b496519f-c8be-45cb-ae47-ea6f4bf15e1a" (UID: "b496519f-c8be-45cb-ae47-ea6f4bf15e1a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.330719 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b496519f-c8be-45cb-ae47-ea6f4bf15e1a-kube-api-access-q7smb" (OuterVolumeSpecName: "kube-api-access-q7smb") pod "b496519f-c8be-45cb-ae47-ea6f4bf15e1a" (UID: "b496519f-c8be-45cb-ae47-ea6f4bf15e1a"). InnerVolumeSpecName "kube-api-access-q7smb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.375595 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b496519f-c8be-45cb-ae47-ea6f4bf15e1a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b496519f-c8be-45cb-ae47-ea6f4bf15e1a" (UID: "b496519f-c8be-45cb-ae47-ea6f4bf15e1a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.422188 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7smb\" (UniqueName: \"kubernetes.io/projected/b496519f-c8be-45cb-ae47-ea6f4bf15e1a-kube-api-access-q7smb\") on node \"crc\" DevicePath \"\"" Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.422233 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b496519f-c8be-45cb-ae47-ea6f4bf15e1a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.422246 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b496519f-c8be-45cb-ae47-ea6f4bf15e1a-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.708234 4718 generic.go:334] "Generic (PLEG): container finished" podID="b496519f-c8be-45cb-ae47-ea6f4bf15e1a" containerID="4cbb6b1bec0d7d746f94a3029279908f946e83aad03208b4608713cbd482477f" exitCode=0 Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.708276 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hb44c" event={"ID":"b496519f-c8be-45cb-ae47-ea6f4bf15e1a","Type":"ContainerDied","Data":"4cbb6b1bec0d7d746f94a3029279908f946e83aad03208b4608713cbd482477f"} Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.708302 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hb44c" event={"ID":"b496519f-c8be-45cb-ae47-ea6f4bf15e1a","Type":"ContainerDied","Data":"6e2e278dd8d9cc261b5a0f5c57fd5bbed003028e1d8d077c025aec8099180f47"} Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.708318 4718 scope.go:117] "RemoveContainer" containerID="4cbb6b1bec0d7d746f94a3029279908f946e83aad03208b4608713cbd482477f" Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.708469 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hb44c" Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.744412 4718 scope.go:117] "RemoveContainer" containerID="cb30eef3f9c0c81e419945f195681dc8e87f2bca6d72d9932b8f8409302fc719" Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.744543 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hb44c"] Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.753409 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hb44c"] Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.766504 4718 scope.go:117] "RemoveContainer" containerID="3fbdd09f82a40b0f296aa67b4b95bb6309102b2b1904eec387305cc84c30790a" Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.830280 4718 scope.go:117] "RemoveContainer" containerID="4cbb6b1bec0d7d746f94a3029279908f946e83aad03208b4608713cbd482477f" Nov 23 15:09:59 crc kubenswrapper[4718]: E1123 15:09:59.830863 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cbb6b1bec0d7d746f94a3029279908f946e83aad03208b4608713cbd482477f\": container with ID starting with 4cbb6b1bec0d7d746f94a3029279908f946e83aad03208b4608713cbd482477f not found: ID does not exist" containerID="4cbb6b1bec0d7d746f94a3029279908f946e83aad03208b4608713cbd482477f" Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.830919 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cbb6b1bec0d7d746f94a3029279908f946e83aad03208b4608713cbd482477f"} err="failed to get container status \"4cbb6b1bec0d7d746f94a3029279908f946e83aad03208b4608713cbd482477f\": rpc error: code = NotFound desc = could not find container \"4cbb6b1bec0d7d746f94a3029279908f946e83aad03208b4608713cbd482477f\": container with ID starting with 4cbb6b1bec0d7d746f94a3029279908f946e83aad03208b4608713cbd482477f not found: ID does not exist" Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.830955 4718 scope.go:117] "RemoveContainer" containerID="cb30eef3f9c0c81e419945f195681dc8e87f2bca6d72d9932b8f8409302fc719" Nov 23 15:09:59 crc kubenswrapper[4718]: E1123 15:09:59.831540 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb30eef3f9c0c81e419945f195681dc8e87f2bca6d72d9932b8f8409302fc719\": container with ID starting with cb30eef3f9c0c81e419945f195681dc8e87f2bca6d72d9932b8f8409302fc719 not found: ID does not exist" containerID="cb30eef3f9c0c81e419945f195681dc8e87f2bca6d72d9932b8f8409302fc719" Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.831572 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb30eef3f9c0c81e419945f195681dc8e87f2bca6d72d9932b8f8409302fc719"} err="failed to get container status \"cb30eef3f9c0c81e419945f195681dc8e87f2bca6d72d9932b8f8409302fc719\": rpc error: code = NotFound desc = could not find container \"cb30eef3f9c0c81e419945f195681dc8e87f2bca6d72d9932b8f8409302fc719\": container with ID starting with cb30eef3f9c0c81e419945f195681dc8e87f2bca6d72d9932b8f8409302fc719 not found: ID does not exist" Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.831595 4718 scope.go:117] "RemoveContainer" containerID="3fbdd09f82a40b0f296aa67b4b95bb6309102b2b1904eec387305cc84c30790a" Nov 23 15:09:59 crc kubenswrapper[4718]: E1123 15:09:59.831964 4718 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3fbdd09f82a40b0f296aa67b4b95bb6309102b2b1904eec387305cc84c30790a\": container with ID starting with 3fbdd09f82a40b0f296aa67b4b95bb6309102b2b1904eec387305cc84c30790a not found: ID does not exist" containerID="3fbdd09f82a40b0f296aa67b4b95bb6309102b2b1904eec387305cc84c30790a" Nov 23 15:09:59 crc kubenswrapper[4718]: I1123 15:09:59.832015 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fbdd09f82a40b0f296aa67b4b95bb6309102b2b1904eec387305cc84c30790a"} err="failed to get container status \"3fbdd09f82a40b0f296aa67b4b95bb6309102b2b1904eec387305cc84c30790a\": rpc error: code = NotFound desc = could not find container \"3fbdd09f82a40b0f296aa67b4b95bb6309102b2b1904eec387305cc84c30790a\": container with ID starting with 3fbdd09f82a40b0f296aa67b4b95bb6309102b2b1904eec387305cc84c30790a not found: ID does not exist" Nov 23 15:10:00 crc kubenswrapper[4718]: I1123 15:10:00.451529 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b496519f-c8be-45cb-ae47-ea6f4bf15e1a" path="/var/lib/kubelet/pods/b496519f-c8be-45cb-ae47-ea6f4bf15e1a/volumes" Nov 23 15:10:01 crc kubenswrapper[4718]: I1123 15:10:01.396174 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mrn78" Nov 23 15:10:01 crc kubenswrapper[4718]: I1123 15:10:01.396511 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mrn78" Nov 23 15:10:02 crc kubenswrapper[4718]: I1123 15:10:02.447474 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mrn78" podUID="8ffd0dd1-a7e7-4a77-ac68-0b98259901dd" containerName="registry-server" probeResult="failure" output=< Nov 23 15:10:02 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Nov 23 15:10:02 crc kubenswrapper[4718]: > Nov 23 15:10:09 crc kubenswrapper[4718]: I1123 15:10:09.146040 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s646w"] Nov 23 15:10:09 crc kubenswrapper[4718]: E1123 15:10:09.147223 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b496519f-c8be-45cb-ae47-ea6f4bf15e1a" containerName="extract-utilities" Nov 23 15:10:09 crc kubenswrapper[4718]: I1123 15:10:09.147239 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b496519f-c8be-45cb-ae47-ea6f4bf15e1a" containerName="extract-utilities" Nov 23 15:10:09 crc kubenswrapper[4718]: E1123 15:10:09.147257 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b496519f-c8be-45cb-ae47-ea6f4bf15e1a" containerName="extract-content" Nov 23 15:10:09 crc kubenswrapper[4718]: I1123 15:10:09.147265 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b496519f-c8be-45cb-ae47-ea6f4bf15e1a" containerName="extract-content" Nov 23 15:10:09 crc kubenswrapper[4718]: E1123 15:10:09.147275 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b496519f-c8be-45cb-ae47-ea6f4bf15e1a" containerName="registry-server" Nov 23 15:10:09 crc kubenswrapper[4718]: I1123 15:10:09.147283 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b496519f-c8be-45cb-ae47-ea6f4bf15e1a" containerName="registry-server" Nov 23 15:10:09 crc kubenswrapper[4718]: I1123 15:10:09.147528 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b496519f-c8be-45cb-ae47-ea6f4bf15e1a" containerName="registry-server" Nov 23 
15:10:09 crc kubenswrapper[4718]: I1123 15:10:09.149222 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s646w" Nov 23 15:10:09 crc kubenswrapper[4718]: I1123 15:10:09.163150 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s646w"] Nov 23 15:10:09 crc kubenswrapper[4718]: I1123 15:10:09.212620 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx22s\" (UniqueName: \"kubernetes.io/projected/37afe694-1154-42b6-8a71-524a4ef54d61-kube-api-access-gx22s\") pod \"community-operators-s646w\" (UID: \"37afe694-1154-42b6-8a71-524a4ef54d61\") " pod="openshift-marketplace/community-operators-s646w" Nov 23 15:10:09 crc kubenswrapper[4718]: I1123 15:10:09.212834 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37afe694-1154-42b6-8a71-524a4ef54d61-utilities\") pod \"community-operators-s646w\" (UID: \"37afe694-1154-42b6-8a71-524a4ef54d61\") " pod="openshift-marketplace/community-operators-s646w" Nov 23 15:10:09 crc kubenswrapper[4718]: I1123 15:10:09.212927 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37afe694-1154-42b6-8a71-524a4ef54d61-catalog-content\") pod \"community-operators-s646w\" (UID: \"37afe694-1154-42b6-8a71-524a4ef54d61\") " pod="openshift-marketplace/community-operators-s646w" Nov 23 15:10:09 crc kubenswrapper[4718]: I1123 15:10:09.315502 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx22s\" (UniqueName: \"kubernetes.io/projected/37afe694-1154-42b6-8a71-524a4ef54d61-kube-api-access-gx22s\") pod \"community-operators-s646w\" (UID: \"37afe694-1154-42b6-8a71-524a4ef54d61\") " pod="openshift-marketplace/community-operators-s646w" Nov 23 15:10:09 crc kubenswrapper[4718]: I1123 15:10:09.315621 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37afe694-1154-42b6-8a71-524a4ef54d61-utilities\") pod \"community-operators-s646w\" (UID: \"37afe694-1154-42b6-8a71-524a4ef54d61\") " pod="openshift-marketplace/community-operators-s646w" Nov 23 15:10:09 crc kubenswrapper[4718]: I1123 15:10:09.315664 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37afe694-1154-42b6-8a71-524a4ef54d61-catalog-content\") pod \"community-operators-s646w\" (UID: \"37afe694-1154-42b6-8a71-524a4ef54d61\") " pod="openshift-marketplace/community-operators-s646w" Nov 23 15:10:09 crc kubenswrapper[4718]: I1123 15:10:09.316219 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37afe694-1154-42b6-8a71-524a4ef54d61-catalog-content\") pod \"community-operators-s646w\" (UID: \"37afe694-1154-42b6-8a71-524a4ef54d61\") " pod="openshift-marketplace/community-operators-s646w" Nov 23 15:10:09 crc kubenswrapper[4718]: I1123 15:10:09.316275 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37afe694-1154-42b6-8a71-524a4ef54d61-utilities\") pod \"community-operators-s646w\" (UID: \"37afe694-1154-42b6-8a71-524a4ef54d61\") " 
pod="openshift-marketplace/community-operators-s646w" Nov 23 15:10:09 crc kubenswrapper[4718]: I1123 15:10:09.342937 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx22s\" (UniqueName: \"kubernetes.io/projected/37afe694-1154-42b6-8a71-524a4ef54d61-kube-api-access-gx22s\") pod \"community-operators-s646w\" (UID: \"37afe694-1154-42b6-8a71-524a4ef54d61\") " pod="openshift-marketplace/community-operators-s646w" Nov 23 15:10:09 crc kubenswrapper[4718]: I1123 15:10:09.494560 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s646w" Nov 23 15:10:09 crc kubenswrapper[4718]: I1123 15:10:09.814227 4718 generic.go:334] "Generic (PLEG): container finished" podID="ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c" containerID="bdb981d88e2c73941f87ffff8865161e1a203efd9fc5372f4abec5f449d2f230" exitCode=0 Nov 23 15:10:09 crc kubenswrapper[4718]: I1123 15:10:09.814315 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z" event={"ID":"ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c","Type":"ContainerDied","Data":"bdb981d88e2c73941f87ffff8865161e1a203efd9fc5372f4abec5f449d2f230"} Nov 23 15:10:10 crc kubenswrapper[4718]: I1123 15:10:10.029882 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s646w"] Nov 23 15:10:10 crc kubenswrapper[4718]: W1123 15:10:10.033268 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37afe694_1154_42b6_8a71_524a4ef54d61.slice/crio-b125db8ef504ee8db142080b278fcd952f05dbc3597c121e9619f053840cba7d WatchSource:0}: Error finding container b125db8ef504ee8db142080b278fcd952f05dbc3597c121e9619f053840cba7d: Status 404 returned error can't find the container with id b125db8ef504ee8db142080b278fcd952f05dbc3597c121e9619f053840cba7d Nov 23 15:10:10 crc kubenswrapper[4718]: I1123 15:10:10.827934 4718 generic.go:334] "Generic (PLEG): container finished" podID="37afe694-1154-42b6-8a71-524a4ef54d61" containerID="0048a19752de2c3a83374fff05802bfa2fec80a954086fecac95f874caf0801e" exitCode=0 Nov 23 15:10:10 crc kubenswrapper[4718]: I1123 15:10:10.828020 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s646w" event={"ID":"37afe694-1154-42b6-8a71-524a4ef54d61","Type":"ContainerDied","Data":"0048a19752de2c3a83374fff05802bfa2fec80a954086fecac95f874caf0801e"} Nov 23 15:10:10 crc kubenswrapper[4718]: I1123 15:10:10.828325 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s646w" event={"ID":"37afe694-1154-42b6-8a71-524a4ef54d61","Type":"ContainerStarted","Data":"b125db8ef504ee8db142080b278fcd952f05dbc3597c121e9619f053840cba7d"} Nov 23 15:10:11 crc kubenswrapper[4718]: I1123 15:10:11.285728 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z" Nov 23 15:10:11 crc kubenswrapper[4718]: I1123 15:10:11.352797 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c-bootstrap-combined-ca-bundle\") pod \"ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c\" (UID: \"ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c\") " Nov 23 15:10:11 crc kubenswrapper[4718]: I1123 15:10:11.353045 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c-ssh-key\") pod \"ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c\" (UID: \"ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c\") " Nov 23 15:10:11 crc kubenswrapper[4718]: I1123 15:10:11.353135 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9fc2\" (UniqueName: \"kubernetes.io/projected/ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c-kube-api-access-g9fc2\") pod \"ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c\" (UID: \"ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c\") " Nov 23 15:10:11 crc kubenswrapper[4718]: I1123 15:10:11.353176 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c-inventory\") pod \"ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c\" (UID: \"ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c\") " Nov 23 15:10:11 crc kubenswrapper[4718]: I1123 15:10:11.358947 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c" (UID: "ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:10:11 crc kubenswrapper[4718]: I1123 15:10:11.361511 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c-kube-api-access-g9fc2" (OuterVolumeSpecName: "kube-api-access-g9fc2") pod "ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c" (UID: "ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c"). InnerVolumeSpecName "kube-api-access-g9fc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:10:11 crc kubenswrapper[4718]: I1123 15:10:11.380409 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c" (UID: "ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:10:11 crc kubenswrapper[4718]: I1123 15:10:11.415643 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c-inventory" (OuterVolumeSpecName: "inventory") pod "ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c" (UID: "ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:10:11 crc kubenswrapper[4718]: I1123 15:10:11.455120 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 15:10:11 crc kubenswrapper[4718]: I1123 15:10:11.455165 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9fc2\" (UniqueName: \"kubernetes.io/projected/ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c-kube-api-access-g9fc2\") on node \"crc\" DevicePath \"\"" Nov 23 15:10:11 crc kubenswrapper[4718]: I1123 15:10:11.455182 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 15:10:11 crc kubenswrapper[4718]: I1123 15:10:11.455202 4718 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:10:11 crc kubenswrapper[4718]: I1123 15:10:11.459766 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mrn78" Nov 23 15:10:11 crc kubenswrapper[4718]: I1123 15:10:11.510240 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mrn78" Nov 23 15:10:11 crc kubenswrapper[4718]: I1123 15:10:11.842733 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z" event={"ID":"ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c","Type":"ContainerDied","Data":"598646cba9164cc009fcba12b99617f58a83f247ccf5afb80a82c6e32348e920"} Nov 23 15:10:11 crc kubenswrapper[4718]: I1123 15:10:11.842774 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="598646cba9164cc009fcba12b99617f58a83f247ccf5afb80a82c6e32348e920" Nov 23 15:10:11 crc kubenswrapper[4718]: I1123 15:10:11.842797 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z" Nov 23 15:10:11 crc kubenswrapper[4718]: I1123 15:10:11.846992 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s646w" event={"ID":"37afe694-1154-42b6-8a71-524a4ef54d61","Type":"ContainerStarted","Data":"15ea1c661e3ee829fe72533d024694a4cc981209352059101e6dff921fea5b66"} Nov 23 15:10:11 crc kubenswrapper[4718]: I1123 15:10:11.998947 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84twb"] Nov 23 15:10:11 crc kubenswrapper[4718]: E1123 15:10:11.999337 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 23 15:10:11 crc kubenswrapper[4718]: I1123 15:10:11.999354 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 23 15:10:11 crc kubenswrapper[4718]: I1123 15:10:11.999554 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 23 15:10:12 crc kubenswrapper[4718]: I1123 15:10:12.000192 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84twb" Nov 23 15:10:12 crc kubenswrapper[4718]: I1123 15:10:12.002548 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m6ng8" Nov 23 15:10:12 crc kubenswrapper[4718]: I1123 15:10:12.003909 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 15:10:12 crc kubenswrapper[4718]: I1123 15:10:12.004090 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 15:10:12 crc kubenswrapper[4718]: I1123 15:10:12.004246 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 15:10:12 crc kubenswrapper[4718]: I1123 15:10:12.014057 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84twb"] Nov 23 15:10:12 crc kubenswrapper[4718]: I1123 15:10:12.065913 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/253a44f7-f768-49ca-88c1-de87b9cbcbbb-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-84twb\" (UID: \"253a44f7-f768-49ca-88c1-de87b9cbcbbb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84twb" Nov 23 15:10:12 crc kubenswrapper[4718]: I1123 15:10:12.065967 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvdtv\" (UniqueName: \"kubernetes.io/projected/253a44f7-f768-49ca-88c1-de87b9cbcbbb-kube-api-access-dvdtv\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-84twb\" (UID: \"253a44f7-f768-49ca-88c1-de87b9cbcbbb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84twb" Nov 23 15:10:12 crc kubenswrapper[4718]: I1123 15:10:12.066053 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/253a44f7-f768-49ca-88c1-de87b9cbcbbb-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-84twb\" (UID: \"253a44f7-f768-49ca-88c1-de87b9cbcbbb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84twb" Nov 23 15:10:12 crc kubenswrapper[4718]: I1123 15:10:12.168293 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/253a44f7-f768-49ca-88c1-de87b9cbcbbb-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-84twb\" (UID: \"253a44f7-f768-49ca-88c1-de87b9cbcbbb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84twb" Nov 23 15:10:12 crc kubenswrapper[4718]: I1123 15:10:12.168530 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/253a44f7-f768-49ca-88c1-de87b9cbcbbb-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-84twb\" (UID: \"253a44f7-f768-49ca-88c1-de87b9cbcbbb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84twb" Nov 23 15:10:12 crc kubenswrapper[4718]: I1123 15:10:12.168584 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvdtv\" (UniqueName: \"kubernetes.io/projected/253a44f7-f768-49ca-88c1-de87b9cbcbbb-kube-api-access-dvdtv\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-84twb\" (UID: \"253a44f7-f768-49ca-88c1-de87b9cbcbbb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84twb" Nov 23 15:10:12 crc kubenswrapper[4718]: I1123 15:10:12.172175 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/253a44f7-f768-49ca-88c1-de87b9cbcbbb-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-84twb\" (UID: \"253a44f7-f768-49ca-88c1-de87b9cbcbbb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84twb" Nov 23 15:10:12 crc kubenswrapper[4718]: I1123 15:10:12.172217 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/253a44f7-f768-49ca-88c1-de87b9cbcbbb-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-84twb\" (UID: \"253a44f7-f768-49ca-88c1-de87b9cbcbbb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84twb" Nov 23 15:10:12 crc kubenswrapper[4718]: I1123 15:10:12.185343 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvdtv\" (UniqueName: \"kubernetes.io/projected/253a44f7-f768-49ca-88c1-de87b9cbcbbb-kube-api-access-dvdtv\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-84twb\" (UID: \"253a44f7-f768-49ca-88c1-de87b9cbcbbb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84twb" Nov 23 15:10:12 crc kubenswrapper[4718]: I1123 15:10:12.315286 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84twb" Nov 23 15:10:12 crc kubenswrapper[4718]: I1123 15:10:12.858293 4718 generic.go:334] "Generic (PLEG): container finished" podID="37afe694-1154-42b6-8a71-524a4ef54d61" containerID="15ea1c661e3ee829fe72533d024694a4cc981209352059101e6dff921fea5b66" exitCode=0 Nov 23 15:10:12 crc kubenswrapper[4718]: I1123 15:10:12.858518 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s646w" event={"ID":"37afe694-1154-42b6-8a71-524a4ef54d61","Type":"ContainerDied","Data":"15ea1c661e3ee829fe72533d024694a4cc981209352059101e6dff921fea5b66"} Nov 23 15:10:12 crc kubenswrapper[4718]: I1123 15:10:12.863839 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84twb"] Nov 23 15:10:13 crc kubenswrapper[4718]: I1123 15:10:13.716524 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mrn78"] Nov 23 15:10:13 crc kubenswrapper[4718]: I1123 15:10:13.717011 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mrn78" podUID="8ffd0dd1-a7e7-4a77-ac68-0b98259901dd" containerName="registry-server" containerID="cri-o://6eb42decefa9b1de589112a972d42b7f16fe2de0025c43eb1c5147b69fed776c" gracePeriod=2 Nov 23 15:10:13 crc kubenswrapper[4718]: I1123 15:10:13.870251 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s646w" event={"ID":"37afe694-1154-42b6-8a71-524a4ef54d61","Type":"ContainerStarted","Data":"cb4cca887c9f9470eff672f872a50be620af9b060428d1e9a3bb7fed297b6906"} Nov 23 15:10:13 crc kubenswrapper[4718]: I1123 15:10:13.876374 4718 generic.go:334] "Generic (PLEG): container finished" podID="8ffd0dd1-a7e7-4a77-ac68-0b98259901dd" containerID="6eb42decefa9b1de589112a972d42b7f16fe2de0025c43eb1c5147b69fed776c" exitCode=0 Nov 23 15:10:13 crc kubenswrapper[4718]: I1123 15:10:13.876508 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrn78" event={"ID":"8ffd0dd1-a7e7-4a77-ac68-0b98259901dd","Type":"ContainerDied","Data":"6eb42decefa9b1de589112a972d42b7f16fe2de0025c43eb1c5147b69fed776c"} Nov 23 15:10:13 crc kubenswrapper[4718]: I1123 15:10:13.878697 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84twb" event={"ID":"253a44f7-f768-49ca-88c1-de87b9cbcbbb","Type":"ContainerStarted","Data":"6a9ad3eb6eaf99f6ab6f9ee3b6e06ee162b914f63330c1a74408ca3335fa59d2"} Nov 23 15:10:13 crc kubenswrapper[4718]: I1123 15:10:13.878731 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84twb" event={"ID":"253a44f7-f768-49ca-88c1-de87b9cbcbbb","Type":"ContainerStarted","Data":"bebc784d3e715108ccaaed8fe610a186a7e58405655404b8cc6d96f8f8207b11"} Nov 23 15:10:13 crc kubenswrapper[4718]: I1123 15:10:13.902632 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s646w" podStartSLOduration=2.450439949 podStartE2EDuration="4.902606669s" podCreationTimestamp="2025-11-23 15:10:09 +0000 UTC" firstStartedPulling="2025-11-23 15:10:10.832041312 +0000 UTC m=+1462.071661166" lastFinishedPulling="2025-11-23 15:10:13.284208032 +0000 UTC m=+1464.523827886" observedRunningTime="2025-11-23 15:10:13.893078184 +0000 UTC m=+1465.132698028" 
watchObservedRunningTime="2025-11-23 15:10:13.902606669 +0000 UTC m=+1465.142226523" Nov 23 15:10:14 crc kubenswrapper[4718]: I1123 15:10:14.199043 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mrn78" Nov 23 15:10:14 crc kubenswrapper[4718]: I1123 15:10:14.204808 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt6jv\" (UniqueName: \"kubernetes.io/projected/8ffd0dd1-a7e7-4a77-ac68-0b98259901dd-kube-api-access-qt6jv\") pod \"8ffd0dd1-a7e7-4a77-ac68-0b98259901dd\" (UID: \"8ffd0dd1-a7e7-4a77-ac68-0b98259901dd\") " Nov 23 15:10:14 crc kubenswrapper[4718]: I1123 15:10:14.204989 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ffd0dd1-a7e7-4a77-ac68-0b98259901dd-utilities\") pod \"8ffd0dd1-a7e7-4a77-ac68-0b98259901dd\" (UID: \"8ffd0dd1-a7e7-4a77-ac68-0b98259901dd\") " Nov 23 15:10:14 crc kubenswrapper[4718]: I1123 15:10:14.206319 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ffd0dd1-a7e7-4a77-ac68-0b98259901dd-catalog-content\") pod \"8ffd0dd1-a7e7-4a77-ac68-0b98259901dd\" (UID: \"8ffd0dd1-a7e7-4a77-ac68-0b98259901dd\") " Nov 23 15:10:14 crc kubenswrapper[4718]: I1123 15:10:14.206225 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ffd0dd1-a7e7-4a77-ac68-0b98259901dd-utilities" (OuterVolumeSpecName: "utilities") pod "8ffd0dd1-a7e7-4a77-ac68-0b98259901dd" (UID: "8ffd0dd1-a7e7-4a77-ac68-0b98259901dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:10:14 crc kubenswrapper[4718]: I1123 15:10:14.206874 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ffd0dd1-a7e7-4a77-ac68-0b98259901dd-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 15:10:14 crc kubenswrapper[4718]: I1123 15:10:14.210610 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ffd0dd1-a7e7-4a77-ac68-0b98259901dd-kube-api-access-qt6jv" (OuterVolumeSpecName: "kube-api-access-qt6jv") pod "8ffd0dd1-a7e7-4a77-ac68-0b98259901dd" (UID: "8ffd0dd1-a7e7-4a77-ac68-0b98259901dd"). InnerVolumeSpecName "kube-api-access-qt6jv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:10:14 crc kubenswrapper[4718]: I1123 15:10:14.219455 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84twb" podStartSLOduration=2.735898385 podStartE2EDuration="3.219414801s" podCreationTimestamp="2025-11-23 15:10:11 +0000 UTC" firstStartedPulling="2025-11-23 15:10:12.868219329 +0000 UTC m=+1464.107839173" lastFinishedPulling="2025-11-23 15:10:13.351735745 +0000 UTC m=+1464.591355589" observedRunningTime="2025-11-23 15:10:13.914814475 +0000 UTC m=+1465.154434319" watchObservedRunningTime="2025-11-23 15:10:14.219414801 +0000 UTC m=+1465.459034645" Nov 23 15:10:14 crc kubenswrapper[4718]: I1123 15:10:14.298680 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ffd0dd1-a7e7-4a77-ac68-0b98259901dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ffd0dd1-a7e7-4a77-ac68-0b98259901dd" (UID: "8ffd0dd1-a7e7-4a77-ac68-0b98259901dd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:10:14 crc kubenswrapper[4718]: I1123 15:10:14.309305 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt6jv\" (UniqueName: \"kubernetes.io/projected/8ffd0dd1-a7e7-4a77-ac68-0b98259901dd-kube-api-access-qt6jv\") on node \"crc\" DevicePath \"\"" Nov 23 15:10:14 crc kubenswrapper[4718]: I1123 15:10:14.309355 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ffd0dd1-a7e7-4a77-ac68-0b98259901dd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 15:10:14 crc kubenswrapper[4718]: I1123 15:10:14.893588 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrn78" event={"ID":"8ffd0dd1-a7e7-4a77-ac68-0b98259901dd","Type":"ContainerDied","Data":"96ab1920dbbb3fc611c0bfefc2ba00a2c9ea58b70feb55af946c18db9b8bd085"} Nov 23 15:10:14 crc kubenswrapper[4718]: I1123 15:10:14.893630 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mrn78" Nov 23 15:10:14 crc kubenswrapper[4718]: I1123 15:10:14.893941 4718 scope.go:117] "RemoveContainer" containerID="6eb42decefa9b1de589112a972d42b7f16fe2de0025c43eb1c5147b69fed776c" Nov 23 15:10:14 crc kubenswrapper[4718]: I1123 15:10:14.920597 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mrn78"] Nov 23 15:10:14 crc kubenswrapper[4718]: I1123 15:10:14.930736 4718 scope.go:117] "RemoveContainer" containerID="282ff60d803aa88a8539d4e3ab34dca8484ab03c820859c219074381e7a51bc9" Nov 23 15:10:14 crc kubenswrapper[4718]: I1123 15:10:14.930914 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mrn78"] Nov 23 15:10:14 crc kubenswrapper[4718]: I1123 15:10:14.954673 4718 scope.go:117] "RemoveContainer" containerID="2db80d08be4ed60c7ad58302e7ac05f86b7ca4a98e8c167380d1b8ef50e55cd4" Nov 23 15:10:16 crc kubenswrapper[4718]: I1123 15:10:16.453725 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ffd0dd1-a7e7-4a77-ac68-0b98259901dd" path="/var/lib/kubelet/pods/8ffd0dd1-a7e7-4a77-ac68-0b98259901dd/volumes" Nov 23 15:10:19 crc kubenswrapper[4718]: I1123 15:10:19.494681 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s646w" Nov 23 15:10:19 crc kubenswrapper[4718]: I1123 15:10:19.494959 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s646w" Nov 23 15:10:19 crc kubenswrapper[4718]: I1123 15:10:19.565937 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s646w" Nov 23 15:10:20 crc kubenswrapper[4718]: I1123 15:10:20.009847 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s646w" Nov 23 15:10:20 crc kubenswrapper[4718]: I1123 15:10:20.731211 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s646w"] Nov 23 15:10:21 crc kubenswrapper[4718]: I1123 15:10:21.968855 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s646w" podUID="37afe694-1154-42b6-8a71-524a4ef54d61" containerName="registry-server" containerID="cri-o://cb4cca887c9f9470eff672f872a50be620af9b060428d1e9a3bb7fed297b6906" gracePeriod=2 
Nov 23 15:10:22 crc kubenswrapper[4718]: I1123 15:10:22.453514 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s646w"
Nov 23 15:10:22 crc kubenswrapper[4718]: I1123 15:10:22.467498 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37afe694-1154-42b6-8a71-524a4ef54d61-utilities\") pod \"37afe694-1154-42b6-8a71-524a4ef54d61\" (UID: \"37afe694-1154-42b6-8a71-524a4ef54d61\") "
Nov 23 15:10:22 crc kubenswrapper[4718]: I1123 15:10:22.468906 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx22s\" (UniqueName: \"kubernetes.io/projected/37afe694-1154-42b6-8a71-524a4ef54d61-kube-api-access-gx22s\") pod \"37afe694-1154-42b6-8a71-524a4ef54d61\" (UID: \"37afe694-1154-42b6-8a71-524a4ef54d61\") "
Nov 23 15:10:22 crc kubenswrapper[4718]: I1123 15:10:22.469022 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37afe694-1154-42b6-8a71-524a4ef54d61-catalog-content\") pod \"37afe694-1154-42b6-8a71-524a4ef54d61\" (UID: \"37afe694-1154-42b6-8a71-524a4ef54d61\") "
Nov 23 15:10:22 crc kubenswrapper[4718]: I1123 15:10:22.468433 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37afe694-1154-42b6-8a71-524a4ef54d61-utilities" (OuterVolumeSpecName: "utilities") pod "37afe694-1154-42b6-8a71-524a4ef54d61" (UID: "37afe694-1154-42b6-8a71-524a4ef54d61"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 15:10:22 crc kubenswrapper[4718]: I1123 15:10:22.475198 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37afe694-1154-42b6-8a71-524a4ef54d61-kube-api-access-gx22s" (OuterVolumeSpecName: "kube-api-access-gx22s") pod "37afe694-1154-42b6-8a71-524a4ef54d61" (UID: "37afe694-1154-42b6-8a71-524a4ef54d61"). InnerVolumeSpecName "kube-api-access-gx22s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:10:22 crc kubenswrapper[4718]: I1123 15:10:22.573143 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37afe694-1154-42b6-8a71-524a4ef54d61-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 15:10:22 crc kubenswrapper[4718]: I1123 15:10:22.573531 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx22s\" (UniqueName: \"kubernetes.io/projected/37afe694-1154-42b6-8a71-524a4ef54d61-kube-api-access-gx22s\") on node \"crc\" DevicePath \"\""
Nov 23 15:10:22 crc kubenswrapper[4718]: I1123 15:10:22.989303 4718 generic.go:334] "Generic (PLEG): container finished" podID="37afe694-1154-42b6-8a71-524a4ef54d61" containerID="cb4cca887c9f9470eff672f872a50be620af9b060428d1e9a3bb7fed297b6906" exitCode=0
Nov 23 15:10:22 crc kubenswrapper[4718]: I1123 15:10:22.989345 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s646w" event={"ID":"37afe694-1154-42b6-8a71-524a4ef54d61","Type":"ContainerDied","Data":"cb4cca887c9f9470eff672f872a50be620af9b060428d1e9a3bb7fed297b6906"}
Nov 23 15:10:22 crc kubenswrapper[4718]: I1123 15:10:22.989372 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s646w" event={"ID":"37afe694-1154-42b6-8a71-524a4ef54d61","Type":"ContainerDied","Data":"b125db8ef504ee8db142080b278fcd952f05dbc3597c121e9619f053840cba7d"}
Nov 23 15:10:22 crc kubenswrapper[4718]: I1123 15:10:22.989377 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s646w"
Nov 23 15:10:22 crc kubenswrapper[4718]: I1123 15:10:22.989390 4718 scope.go:117] "RemoveContainer" containerID="cb4cca887c9f9470eff672f872a50be620af9b060428d1e9a3bb7fed297b6906"
Nov 23 15:10:23 crc kubenswrapper[4718]: I1123 15:10:23.017623 4718 scope.go:117] "RemoveContainer" containerID="15ea1c661e3ee829fe72533d024694a4cc981209352059101e6dff921fea5b66"
Nov 23 15:10:23 crc kubenswrapper[4718]: I1123 15:10:23.037389 4718 scope.go:117] "RemoveContainer" containerID="0048a19752de2c3a83374fff05802bfa2fec80a954086fecac95f874caf0801e"
Nov 23 15:10:23 crc kubenswrapper[4718]: I1123 15:10:23.053567 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 15:10:23 crc kubenswrapper[4718]: I1123 15:10:23.053638 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 15:10:23 crc kubenswrapper[4718]: I1123 15:10:23.077166 4718 scope.go:117] "RemoveContainer" containerID="cb4cca887c9f9470eff672f872a50be620af9b060428d1e9a3bb7fed297b6906"
Nov 23 15:10:23 crc kubenswrapper[4718]: E1123 15:10:23.077701 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb4cca887c9f9470eff672f872a50be620af9b060428d1e9a3bb7fed297b6906\": container with ID starting with cb4cca887c9f9470eff672f872a50be620af9b060428d1e9a3bb7fed297b6906 not found: ID does not exist" containerID="cb4cca887c9f9470eff672f872a50be620af9b060428d1e9a3bb7fed297b6906"
Nov 23 15:10:23 crc kubenswrapper[4718]: I1123 15:10:23.077802 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb4cca887c9f9470eff672f872a50be620af9b060428d1e9a3bb7fed297b6906"} err="failed to get container status \"cb4cca887c9f9470eff672f872a50be620af9b060428d1e9a3bb7fed297b6906\": rpc error: code = NotFound desc = could not find container \"cb4cca887c9f9470eff672f872a50be620af9b060428d1e9a3bb7fed297b6906\": container with ID starting with cb4cca887c9f9470eff672f872a50be620af9b060428d1e9a3bb7fed297b6906 not found: ID does not exist"
Nov 23 15:10:23 crc kubenswrapper[4718]: I1123 15:10:23.077865 4718 scope.go:117] "RemoveContainer" containerID="15ea1c661e3ee829fe72533d024694a4cc981209352059101e6dff921fea5b66"
Nov 23 15:10:23 crc kubenswrapper[4718]: E1123 15:10:23.078456 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15ea1c661e3ee829fe72533d024694a4cc981209352059101e6dff921fea5b66\": container with ID starting with 15ea1c661e3ee829fe72533d024694a4cc981209352059101e6dff921fea5b66 not found: ID does not exist" containerID="15ea1c661e3ee829fe72533d024694a4cc981209352059101e6dff921fea5b66"
Nov 23 15:10:23 crc kubenswrapper[4718]: I1123 15:10:23.078497 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15ea1c661e3ee829fe72533d024694a4cc981209352059101e6dff921fea5b66"} err="failed to get container status \"15ea1c661e3ee829fe72533d024694a4cc981209352059101e6dff921fea5b66\": rpc error: code = NotFound desc = could not find container \"15ea1c661e3ee829fe72533d024694a4cc981209352059101e6dff921fea5b66\": container with ID starting with 15ea1c661e3ee829fe72533d024694a4cc981209352059101e6dff921fea5b66 not found: ID does not exist"
Nov 23 15:10:23 crc kubenswrapper[4718]: I1123 15:10:23.078528 4718 scope.go:117] "RemoveContainer" containerID="0048a19752de2c3a83374fff05802bfa2fec80a954086fecac95f874caf0801e"
Nov 23 15:10:23 crc kubenswrapper[4718]: E1123 15:10:23.078856 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0048a19752de2c3a83374fff05802bfa2fec80a954086fecac95f874caf0801e\": container with ID starting with 0048a19752de2c3a83374fff05802bfa2fec80a954086fecac95f874caf0801e not found: ID does not exist" containerID="0048a19752de2c3a83374fff05802bfa2fec80a954086fecac95f874caf0801e"
Nov 23 15:10:23 crc kubenswrapper[4718]: I1123 15:10:23.078899 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0048a19752de2c3a83374fff05802bfa2fec80a954086fecac95f874caf0801e"} err="failed to get container status \"0048a19752de2c3a83374fff05802bfa2fec80a954086fecac95f874caf0801e\": rpc error: code = NotFound desc = could not find container \"0048a19752de2c3a83374fff05802bfa2fec80a954086fecac95f874caf0801e\": container with ID starting with 0048a19752de2c3a83374fff05802bfa2fec80a954086fecac95f874caf0801e not found: ID does not exist"
Nov 23 15:10:23 crc kubenswrapper[4718]: I1123 15:10:23.408599 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37afe694-1154-42b6-8a71-524a4ef54d61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37afe694-1154-42b6-8a71-524a4ef54d61" (UID: "37afe694-1154-42b6-8a71-524a4ef54d61"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 15:10:23 crc kubenswrapper[4718]: I1123 15:10:23.494059 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37afe694-1154-42b6-8a71-524a4ef54d61-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 15:10:23 crc kubenswrapper[4718]: I1123 15:10:23.621917 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s646w"]
Nov 23 15:10:23 crc kubenswrapper[4718]: I1123 15:10:23.639327 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s646w"]
Nov 23 15:10:24 crc kubenswrapper[4718]: I1123 15:10:24.456092 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37afe694-1154-42b6-8a71-524a4ef54d61" path="/var/lib/kubelet/pods/37afe694-1154-42b6-8a71-524a4ef54d61/volumes"
Nov 23 15:10:53 crc kubenswrapper[4718]: I1123 15:10:53.053999 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 15:10:53 crc kubenswrapper[4718]: I1123 15:10:53.054791 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 15:10:53 crc kubenswrapper[4718]: I1123 15:10:53.054856 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw"
Nov 23 15:10:53 crc kubenswrapper[4718]: I1123 15:10:53.055862 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c"} pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 23 15:10:53 crc kubenswrapper[4718]: I1123 15:10:53.055951 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" containerID="cri-o://2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c" gracePeriod=600
Nov 23 15:10:53 crc kubenswrapper[4718]: E1123 15:10:53.180375 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:10:53 crc kubenswrapper[4718]: I1123 15:10:53.290360 4718 generic.go:334] "Generic (PLEG): container finished" podID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c" exitCode=0
Nov 23 15:10:53 crc kubenswrapper[4718]: I1123 15:10:53.290426 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerDied","Data":"2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c"}
Nov 23 15:10:53 crc kubenswrapper[4718]: I1123 15:10:53.290502 4718 scope.go:117] "RemoveContainer" containerID="1501bec9fe9ec98aacf1e278e6a359530da30674903e2cad276cc832e866bc17"
Nov 23 15:10:53 crc kubenswrapper[4718]: I1123 15:10:53.291150 4718 scope.go:117] "RemoveContainer" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c"
Nov 23 15:10:53 crc kubenswrapper[4718]: E1123 15:10:53.291506 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:11:04 crc kubenswrapper[4718]: I1123 15:11:04.044129 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-24bf-account-create-h2qc7"]
Nov 23 15:11:04 crc kubenswrapper[4718]: I1123 15:11:04.053657 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-9b42d"]
Nov 23 15:11:04 crc kubenswrapper[4718]: I1123 15:11:04.061380 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-144d-account-create-7wszp"]
Nov 23 15:11:04 crc kubenswrapper[4718]: I1123 15:11:04.069670 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-144d-account-create-7wszp"]
Nov 23 15:11:04 crc kubenswrapper[4718]: I1123 15:11:04.077297 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-24bf-account-create-h2qc7"]
Nov 23 15:11:04 crc kubenswrapper[4718]: I1123 15:11:04.084654 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-9b42d"]
Nov 23 15:11:04 crc kubenswrapper[4718]: I1123 15:11:04.456007 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a426f26e-d114-4570-81da-3be0c4aca095" path="/var/lib/kubelet/pods/a426f26e-d114-4570-81da-3be0c4aca095/volumes"
Nov 23 15:11:04 crc kubenswrapper[4718]: I1123 15:11:04.456946 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc8693ac-cbb7-4468-8600-32ef277c0db1" path="/var/lib/kubelet/pods/bc8693ac-cbb7-4468-8600-32ef277c0db1/volumes"
Nov 23 15:11:04 crc kubenswrapper[4718]: I1123 15:11:04.457759 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4073833-fe54-4322-9dd2-63d98b5f9788" path="/var/lib/kubelet/pods/f4073833-fe54-4322-9dd2-63d98b5f9788/volumes"
Nov 23 15:11:06 crc kubenswrapper[4718]: I1123 15:11:06.441178 4718 scope.go:117] "RemoveContainer" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c"
Nov 23 15:11:06 crc kubenswrapper[4718]: E1123 15:11:06.443184 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:11:09 crc kubenswrapper[4718]: I1123 15:11:09.025190 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-rbg79"]
Nov 23 15:11:09 crc kubenswrapper[4718]: I1123 15:11:09.034667 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-rbg79"]
Nov 23 15:11:10 crc kubenswrapper[4718]: I1123 15:11:10.030772 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7fcb-account-create-w9xds"]
Nov 23 15:11:10 crc kubenswrapper[4718]: I1123 15:11:10.041869 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7fcb-account-create-w9xds"]
Nov 23 15:11:10 crc kubenswrapper[4718]: I1123 15:11:10.461863 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b125dda-4aa9-4a44-a714-f553fa853648" path="/var/lib/kubelet/pods/5b125dda-4aa9-4a44-a714-f553fa853648/volumes"
Nov 23 15:11:10 crc kubenswrapper[4718]: I1123 15:11:10.462685 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bfa5922-5bb7-4b2c-a5ca-7349e85289a9" path="/var/lib/kubelet/pods/9bfa5922-5bb7-4b2c-a5ca-7349e85289a9/volumes"
Nov 23 15:11:14 crc kubenswrapper[4718]: I1123 15:11:14.035667 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-8qshv"]
Nov 23 15:11:14 crc kubenswrapper[4718]: I1123 15:11:14.044907 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-8qshv"]
Nov 23 15:11:14 crc kubenswrapper[4718]: I1123 15:11:14.458047 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54124fda-4239-42e9-b86f-2cfa15af47f0" path="/var/lib/kubelet/pods/54124fda-4239-42e9-b86f-2cfa15af47f0/volumes"
Nov 23 15:11:20 crc kubenswrapper[4718]: I1123 15:11:20.449607 4718 scope.go:117] "RemoveContainer" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c"
Nov 23 15:11:20 crc kubenswrapper[4718]: E1123 15:11:20.450554 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:11:28 crc kubenswrapper[4718]: I1123 15:11:28.052124 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-e4cc-account-create-wbqq7"]
Nov 23 15:11:28 crc kubenswrapper[4718]: I1123 15:11:28.065053 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c821-account-create-hkfkc"]
Nov 23 15:11:28 crc kubenswrapper[4718]: I1123 15:11:28.074234 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-zjsxx"]
Nov 23 15:11:28 crc kubenswrapper[4718]: I1123 15:11:28.081754 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-e4cc-account-create-wbqq7"]
Nov 23 15:11:28 crc kubenswrapper[4718]: I1123 15:11:28.088805 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-zjsxx"]
Nov 23 15:11:28 crc kubenswrapper[4718]: I1123 15:11:28.097213 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-c821-account-create-hkfkc"]
Nov 23 15:11:28 crc kubenswrapper[4718]: I1123 15:11:28.452862 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7718d50a-4a43-4c73-90ab-b7019cc29fb3" path="/var/lib/kubelet/pods/7718d50a-4a43-4c73-90ab-b7019cc29fb3/volumes"
Nov 23 15:11:28 crc kubenswrapper[4718]: I1123 15:11:28.453495 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc37da05-93d6-403b-b320-c2445c7880cd" path="/var/lib/kubelet/pods/cc37da05-93d6-403b-b320-c2445c7880cd/volumes"
Nov 23 15:11:28 crc kubenswrapper[4718]: I1123 15:11:28.454039 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f64934b7-7c67-49de-9f77-ac5c6873a04b" path="/var/lib/kubelet/pods/f64934b7-7c67-49de-9f77-ac5c6873a04b/volumes"
Nov 23 15:11:31 crc kubenswrapper[4718]: I1123 15:11:31.027428 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c053-account-create-pnjvh"]
Nov 23 15:11:31 crc kubenswrapper[4718]: I1123 15:11:31.036803 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-wwnbd"]
Nov 23 15:11:31 crc kubenswrapper[4718]: I1123 15:11:31.046089 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-656vg"]
Nov 23 15:11:31 crc kubenswrapper[4718]: I1123 15:11:31.053053 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c053-account-create-pnjvh"]
Nov 23 15:11:31 crc kubenswrapper[4718]: I1123 15:11:31.059582 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-wwnbd"]
Nov 23 15:11:31 crc kubenswrapper[4718]: I1123 15:11:31.065871 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-656vg"]
Nov 23 15:11:32 crc kubenswrapper[4718]: I1123 15:11:32.455913 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="719445f9-11cc-4f25-bf8d-bdcb03e1b443" path="/var/lib/kubelet/pods/719445f9-11cc-4f25-bf8d-bdcb03e1b443/volumes"
Nov 23 15:11:32 crc kubenswrapper[4718]: I1123 15:11:32.456862 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4b92ece-807b-4c8b-9c77-906c5e70804c" path="/var/lib/kubelet/pods/a4b92ece-807b-4c8b-9c77-906c5e70804c/volumes"
Nov 23 15:11:32 crc kubenswrapper[4718]: I1123 15:11:32.457469 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a582e7aa-1025-4a28-9dc8-18fbb5d1d857" path="/var/lib/kubelet/pods/a582e7aa-1025-4a28-9dc8-18fbb5d1d857/volumes"
Nov 23 15:11:33 crc kubenswrapper[4718]: I1123 15:11:33.442051 4718 scope.go:117] "RemoveContainer" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c"
Nov 23 15:11:33 crc kubenswrapper[4718]: E1123 15:11:33.442324 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:11:42 crc kubenswrapper[4718]: I1123 15:11:42.731595 4718 scope.go:117] "RemoveContainer" containerID="765a9eb9db5def8300090a514d22a6b5489f8dddefa5ba36a31e0fc9320106b3"
Nov 23 15:11:42 crc kubenswrapper[4718]: I1123 15:11:42.809512 4718 scope.go:117] "RemoveContainer" containerID="6ac5ec00a43f3ddbbe1b3ddb068af81fe9dd29ee5b4516aadb9b269da681d291"
Nov 23 15:11:42 crc kubenswrapper[4718]: I1123 15:11:42.837130 4718 scope.go:117] "RemoveContainer" containerID="02ef4ad92caf0fc426522862706826d2be6222f8dbce6c94c9b6c25c43e775e7"
Nov 23 15:11:42 crc kubenswrapper[4718]: I1123 15:11:42.888140 4718 scope.go:117] "RemoveContainer" containerID="5a93acbcba642355db3ab8b2c13bc22652ed53e3d87b23d524f8aa13984836b8"
Nov 23 15:11:42 crc kubenswrapper[4718]: I1123 15:11:42.945471 4718 scope.go:117] "RemoveContainer" containerID="180c6340ecdcd4d2712957cc955e852c2bdc244eb5f606c103eed1fe4d7e0044"
Nov 23 15:11:42 crc kubenswrapper[4718]: I1123 15:11:42.988659 4718 scope.go:117] "RemoveContainer" containerID="ba52de14a3c4fc9fae39b20b264b56c24f4a231ef014b4ac53d915c9ee32f870"
Nov 23 15:11:43 crc kubenswrapper[4718]: I1123 15:11:43.027171 4718 scope.go:117] "RemoveContainer" containerID="e96928f4ed914a5bfdfd9976e1731c2784f38d27a9de25ebaaeecfa9fdb253a6"
Nov 23 15:11:43 crc kubenswrapper[4718]: I1123 15:11:43.050282 4718 scope.go:117] "RemoveContainer" containerID="72a6c3e7829fea07808719d8fc1f99e25ddad095b707bcb17dbcea988a53d3a6"
Nov 23 15:11:43 crc kubenswrapper[4718]: I1123 15:11:43.071785 4718 scope.go:117] "RemoveContainer" containerID="f1c4531eabee56929904e35aa51e956c96ed3d32c9203f598b06e943c4093bdd"
Nov 23 15:11:43 crc kubenswrapper[4718]: I1123 15:11:43.107684 4718 scope.go:117] "RemoveContainer" containerID="99d685242e28f1b9c704b5364be1c9f9dae2fa6b75d9a23e1e3db3290ee373b6"
Nov 23 15:11:43 crc kubenswrapper[4718]: I1123 15:11:43.132467 4718 scope.go:117] "RemoveContainer" containerID="13c73a4e15a892fee455490e64390494ebd1e32453b7c1bd2b57a206a66e8d87"
Nov 23 15:11:43 crc kubenswrapper[4718]: I1123 15:11:43.159310 4718 scope.go:117] "RemoveContainer" containerID="ae0ea662ea0dc1808e8aa0fec73cd76701c79218067748699bfde5524cdbd907"
Nov 23 15:11:43 crc kubenswrapper[4718]: I1123 15:11:43.195360 4718 scope.go:117] "RemoveContainer" containerID="039160df563beaf094627e6deadf18d2f33f46a614921a86711a30592765f3f3"
Nov 23 15:11:43 crc kubenswrapper[4718]: I1123 15:11:43.235312 4718 scope.go:117] "RemoveContainer" containerID="5c87f207db05d688db35a5f691bc2a6d98780957b66a0b1a80a2535fbdad52eb"
Nov 23 15:11:44 crc kubenswrapper[4718]: I1123 15:11:44.053062 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-9lfvb"]
Nov 23 15:11:44 crc kubenswrapper[4718]: I1123 15:11:44.069680 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-9lfvb"]
Nov 23 15:11:44 crc kubenswrapper[4718]: I1123 15:11:44.452153 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60bc235d-395c-4e4b-be2a-26e2f7e7bf78" path="/var/lib/kubelet/pods/60bc235d-395c-4e4b-be2a-26e2f7e7bf78/volumes"
Nov 23 15:11:48 crc kubenswrapper[4718]: I1123 15:11:48.441818 4718 scope.go:117] "RemoveContainer" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c"
Nov 23 15:11:48 crc kubenswrapper[4718]: E1123 15:11:48.442924 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:11:53 crc kubenswrapper[4718]: I1123 15:11:53.926068 4718 generic.go:334] "Generic (PLEG): container finished" podID="253a44f7-f768-49ca-88c1-de87b9cbcbbb" containerID="6a9ad3eb6eaf99f6ab6f9ee3b6e06ee162b914f63330c1a74408ca3335fa59d2" exitCode=0
Nov 23 15:11:53 crc kubenswrapper[4718]: I1123 15:11:53.926202 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84twb" event={"ID":"253a44f7-f768-49ca-88c1-de87b9cbcbbb","Type":"ContainerDied","Data":"6a9ad3eb6eaf99f6ab6f9ee3b6e06ee162b914f63330c1a74408ca3335fa59d2"}
Nov 23 15:11:55 crc kubenswrapper[4718]: I1123 15:11:55.048470 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-sz47m"]
Nov 23 15:11:55 crc kubenswrapper[4718]: I1123 15:11:55.055970 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-sz47m"]
Nov 23 15:11:55 crc kubenswrapper[4718]: I1123 15:11:55.385085 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84twb"
Nov 23 15:11:55 crc kubenswrapper[4718]: I1123 15:11:55.441937 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvdtv\" (UniqueName: \"kubernetes.io/projected/253a44f7-f768-49ca-88c1-de87b9cbcbbb-kube-api-access-dvdtv\") pod \"253a44f7-f768-49ca-88c1-de87b9cbcbbb\" (UID: \"253a44f7-f768-49ca-88c1-de87b9cbcbbb\") "
Nov 23 15:11:55 crc kubenswrapper[4718]: I1123 15:11:55.441979 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/253a44f7-f768-49ca-88c1-de87b9cbcbbb-ssh-key\") pod \"253a44f7-f768-49ca-88c1-de87b9cbcbbb\" (UID: \"253a44f7-f768-49ca-88c1-de87b9cbcbbb\") "
Nov 23 15:11:55 crc kubenswrapper[4718]: I1123 15:11:55.442164 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/253a44f7-f768-49ca-88c1-de87b9cbcbbb-inventory\") pod \"253a44f7-f768-49ca-88c1-de87b9cbcbbb\" (UID: \"253a44f7-f768-49ca-88c1-de87b9cbcbbb\") "
Nov 23 15:11:55 crc kubenswrapper[4718]: I1123 15:11:55.450263 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/253a44f7-f768-49ca-88c1-de87b9cbcbbb-kube-api-access-dvdtv" (OuterVolumeSpecName: "kube-api-access-dvdtv") pod "253a44f7-f768-49ca-88c1-de87b9cbcbbb" (UID: "253a44f7-f768-49ca-88c1-de87b9cbcbbb"). InnerVolumeSpecName "kube-api-access-dvdtv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:11:55 crc kubenswrapper[4718]: I1123 15:11:55.467995 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/253a44f7-f768-49ca-88c1-de87b9cbcbbb-inventory" (OuterVolumeSpecName: "inventory") pod "253a44f7-f768-49ca-88c1-de87b9cbcbbb" (UID: "253a44f7-f768-49ca-88c1-de87b9cbcbbb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:11:55 crc kubenswrapper[4718]: I1123 15:11:55.487248 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/253a44f7-f768-49ca-88c1-de87b9cbcbbb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "253a44f7-f768-49ca-88c1-de87b9cbcbbb" (UID: "253a44f7-f768-49ca-88c1-de87b9cbcbbb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:11:55 crc kubenswrapper[4718]: I1123 15:11:55.543961 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/253a44f7-f768-49ca-88c1-de87b9cbcbbb-inventory\") on node \"crc\" DevicePath \"\""
Nov 23 15:11:55 crc kubenswrapper[4718]: I1123 15:11:55.543995 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvdtv\" (UniqueName: \"kubernetes.io/projected/253a44f7-f768-49ca-88c1-de87b9cbcbbb-kube-api-access-dvdtv\") on node \"crc\" DevicePath \"\""
Nov 23 15:11:55 crc kubenswrapper[4718]: I1123 15:11:55.544005 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/253a44f7-f768-49ca-88c1-de87b9cbcbbb-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 23 15:11:55 crc kubenswrapper[4718]: I1123 15:11:55.948278 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84twb" event={"ID":"253a44f7-f768-49ca-88c1-de87b9cbcbbb","Type":"ContainerDied","Data":"bebc784d3e715108ccaaed8fe610a186a7e58405655404b8cc6d96f8f8207b11"}
Nov 23 15:11:55 crc kubenswrapper[4718]: I1123 15:11:55.948685 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bebc784d3e715108ccaaed8fe610a186a7e58405655404b8cc6d96f8f8207b11"
Nov 23 15:11:55 crc kubenswrapper[4718]: I1123 15:11:55.948692 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84twb"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.038125 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b"]
Nov 23 15:11:56 crc kubenswrapper[4718]: E1123 15:11:56.038835 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37afe694-1154-42b6-8a71-524a4ef54d61" containerName="extract-content"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.038933 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="37afe694-1154-42b6-8a71-524a4ef54d61" containerName="extract-content"
Nov 23 15:11:56 crc kubenswrapper[4718]: E1123 15:11:56.039002 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37afe694-1154-42b6-8a71-524a4ef54d61" containerName="extract-utilities"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.039090 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="37afe694-1154-42b6-8a71-524a4ef54d61" containerName="extract-utilities"
Nov 23 15:11:56 crc kubenswrapper[4718]: E1123 15:11:56.039161 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="253a44f7-f768-49ca-88c1-de87b9cbcbbb" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.039251 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="253a44f7-f768-49ca-88c1-de87b9cbcbbb" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Nov 23 15:11:56 crc kubenswrapper[4718]: E1123 15:11:56.039357 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ffd0dd1-a7e7-4a77-ac68-0b98259901dd" containerName="extract-content"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.039418 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ffd0dd1-a7e7-4a77-ac68-0b98259901dd" containerName="extract-content"
Nov 23 15:11:56 crc kubenswrapper[4718]: E1123 15:11:56.039497 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37afe694-1154-42b6-8a71-524a4ef54d61" containerName="registry-server"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.039563 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="37afe694-1154-42b6-8a71-524a4ef54d61" containerName="registry-server"
Nov 23 15:11:56 crc kubenswrapper[4718]: E1123 15:11:56.039702 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ffd0dd1-a7e7-4a77-ac68-0b98259901dd" containerName="extract-utilities"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.039766 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ffd0dd1-a7e7-4a77-ac68-0b98259901dd" containerName="extract-utilities"
Nov 23 15:11:56 crc kubenswrapper[4718]: E1123 15:11:56.039838 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ffd0dd1-a7e7-4a77-ac68-0b98259901dd" containerName="registry-server"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.039899 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ffd0dd1-a7e7-4a77-ac68-0b98259901dd" containerName="registry-server"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.040245 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="253a44f7-f768-49ca-88c1-de87b9cbcbbb" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.040336 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ffd0dd1-a7e7-4a77-ac68-0b98259901dd" containerName="registry-server"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.040412 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="37afe694-1154-42b6-8a71-524a4ef54d61" containerName="registry-server"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.041273 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.043969 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.044341 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.044649 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.045237 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m6ng8"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.045898 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b"]
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.155605 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9cfj\" (UniqueName: \"kubernetes.io/projected/843c6972-d172-42d0-9c7c-bacd49de3307-kube-api-access-m9cfj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b\" (UID: \"843c6972-d172-42d0-9c7c-bacd49de3307\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.155667 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/843c6972-d172-42d0-9c7c-bacd49de3307-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b\" (UID: \"843c6972-d172-42d0-9c7c-bacd49de3307\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.155697 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/843c6972-d172-42d0-9c7c-bacd49de3307-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b\" (UID: \"843c6972-d172-42d0-9c7c-bacd49de3307\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.257764 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9cfj\" (UniqueName: \"kubernetes.io/projected/843c6972-d172-42d0-9c7c-bacd49de3307-kube-api-access-m9cfj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b\" (UID: \"843c6972-d172-42d0-9c7c-bacd49de3307\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.257824 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/843c6972-d172-42d0-9c7c-bacd49de3307-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b\" (UID: \"843c6972-d172-42d0-9c7c-bacd49de3307\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.257856 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/843c6972-d172-42d0-9c7c-bacd49de3307-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b\" (UID: \"843c6972-d172-42d0-9c7c-bacd49de3307\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.265201 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/843c6972-d172-42d0-9c7c-bacd49de3307-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b\" (UID: \"843c6972-d172-42d0-9c7c-bacd49de3307\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.271731 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/843c6972-d172-42d0-9c7c-bacd49de3307-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b\" (UID: \"843c6972-d172-42d0-9c7c-bacd49de3307\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.284021 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9cfj\" (UniqueName: \"kubernetes.io/projected/843c6972-d172-42d0-9c7c-bacd49de3307-kube-api-access-m9cfj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b\" (UID: \"843c6972-d172-42d0-9c7c-bacd49de3307\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.372063 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.461292 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1935d5b8-9171-4708-9b50-4dbef79d106b" path="/var/lib/kubelet/pods/1935d5b8-9171-4708-9b50-4dbef79d106b/volumes"
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.922748 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b"]
Nov 23 15:11:56 crc kubenswrapper[4718]: I1123 15:11:56.956524 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b" event={"ID":"843c6972-d172-42d0-9c7c-bacd49de3307","Type":"ContainerStarted","Data":"7d82c611f52dd69c4113c179fe3eae331d2608f9efe1005d1f5009a8ca266e1a"}
Nov 23 15:11:57 crc kubenswrapper[4718]: I1123 15:11:57.969751 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b" event={"ID":"843c6972-d172-42d0-9c7c-bacd49de3307","Type":"ContainerStarted","Data":"fee8a2764c75ffcf9093aa3094044368234df7ef760cbafe08f3d681ae3293ae"}
Nov 23 15:11:57 crc kubenswrapper[4718]: I1123 15:11:57.995516 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b" podStartSLOduration=1.4974500370000001 podStartE2EDuration="1.995491288s" podCreationTimestamp="2025-11-23 15:11:56 +0000 UTC" firstStartedPulling="2025-11-23 15:11:56.915696233 +0000 UTC m=+1568.155316077" lastFinishedPulling="2025-11-23 15:11:57.413737484 +0000 UTC m=+1568.653357328" observedRunningTime="2025-11-23 15:11:57.989433287 +0000 UTC m=+1569.229053151" watchObservedRunningTime="2025-11-23 15:11:57.995491288 +0000 UTC m=+1569.235111172"
Nov 23 15:12:01 crc kubenswrapper[4718]: I1123 15:12:01.441493 4718 scope.go:117] "RemoveContainer" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c"
Nov 23 15:12:01 crc kubenswrapper[4718]: E1123 15:12:01.442283 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:12:14 crc kubenswrapper[4718]: I1123 15:12:14.441200 4718 scope.go:117] "RemoveContainer" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c"
Nov 23 15:12:14 crc kubenswrapper[4718]: E1123 15:12:14.442480 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:12:25 crc kubenswrapper[4718]: I1123 15:12:25.440952 4718 scope.go:117] "RemoveContainer" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c"
Nov 23 15:12:25 crc kubenswrapper[4718]: E1123 15:12:25.441827 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:12:39 crc kubenswrapper[4718]: I1123 15:12:39.046342 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-j62px"]
Nov 23 15:12:39 crc kubenswrapper[4718]: I1123 15:12:39.056033 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-j62px"]
Nov 23 15:12:39 crc kubenswrapper[4718]: I1123 15:12:39.443624 4718 scope.go:117] "RemoveContainer" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c"
Nov 23 15:12:39 crc kubenswrapper[4718]: E1123 15:12:39.445248 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:12:40 crc kubenswrapper[4718]: I1123 15:12:40.457292 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef41e996-1748-4b67-ad0c-1b7481290391" path="/var/lib/kubelet/pods/ef41e996-1748-4b67-ad0c-1b7481290391/volumes"
Nov 23 15:12:43 crc kubenswrapper[4718]: I1123 15:12:43.803616 4718 scope.go:117] "RemoveContainer" containerID="1cd1b3be77b6cf9f153c14c78030cccd0867fa5a7b0edcb5fc39c9551bd8fc78"
Nov 23 15:12:43 crc kubenswrapper[4718]: I1123 15:12:43.876243 4718 scope.go:117] "RemoveContainer" containerID="9e6513797c7d6343ea7615bc111ffe98a5a5d8b3de0602f1552bcc1be5d7b63f"
Nov 23 15:12:43 crc kubenswrapper[4718]: I1123 15:12:43.929805 4718 scope.go:117] "RemoveContainer" containerID="3d239e018f6426fde1107bd93f66bb487c96cb841c0024b1e617c5728cf62874"
Nov 23 15:12:49 crc kubenswrapper[4718]: I1123 15:12:49.032177 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-2m7cd"]
Nov 23 15:12:49 crc kubenswrapper[4718]: I1123 15:12:49.048410 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-2m7cd"]
Nov 23 15:12:50 crc kubenswrapper[4718]: I1123 15:12:50.449370 4718 scope.go:117] "RemoveContainer" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c"
Nov 23 15:12:50 crc kubenswrapper[4718]: E1123 15:12:50.449830 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:12:50 crc kubenswrapper[4718]: I1123 15:12:50.452119 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88793089-cfde-48d5-8670-880344ab6711" path="/var/lib/kubelet/pods/88793089-cfde-48d5-8670-880344ab6711/volumes"
Nov 23 15:12:51 crc kubenswrapper[4718]: I1123 15:12:51.042053 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-2rx99"]
Nov 23 15:12:51 crc kubenswrapper[4718]: I1123 15:12:51.050145 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-2rx99"]
Nov 23 15:12:52 crc kubenswrapper[4718]: I1123 15:12:52.454210 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2456d901-f349-4f7a-b9cc-63c9eba428c8" path="/var/lib/kubelet/pods/2456d901-f349-4f7a-b9cc-63c9eba428c8/volumes"
Nov 23 15:12:56 crc kubenswrapper[4718]: I1123 15:12:56.038184 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-cfbmg"]
Nov 23 15:12:56 crc kubenswrapper[4718]: I1123 15:12:56.051579 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-cfbmg"]
Nov 23 15:12:56 crc kubenswrapper[4718]: I1123 15:12:56.456135 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2de4e428-ba8b-43d2-893d-e6f020997e5b" path="/var/lib/kubelet/pods/2de4e428-ba8b-43d2-893d-e6f020997e5b/volumes"
Nov 23 15:12:59 crc kubenswrapper[4718]: I1123 15:12:59.037875 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-bppgm"]
Nov 23 15:12:59 crc kubenswrapper[4718]: I1123 15:12:59.053755 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-bppgm"]
Nov 23 15:13:00 crc kubenswrapper[4718]: I1123 15:13:00.458730 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58" path="/var/lib/kubelet/pods/4a3b7bc7-1950-41d8-998b-b8ca6a7f6e58/volumes"
Nov 23 15:13:03 crc kubenswrapper[4718]: I1123 15:13:03.441850 4718 scope.go:117] "RemoveContainer" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c"
Nov 23 15:13:03 crc kubenswrapper[4718]: E1123 15:13:03.442734 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:13:13 crc kubenswrapper[4718]: I1123 15:13:13.972650 4718 generic.go:334] "Generic (PLEG): container finished" podID="843c6972-d172-42d0-9c7c-bacd49de3307" containerID="fee8a2764c75ffcf9093aa3094044368234df7ef760cbafe08f3d681ae3293ae" exitCode=0
Nov 23 15:13:13 crc kubenswrapper[4718]: I1123 15:13:13.972716 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b" event={"ID":"843c6972-d172-42d0-9c7c-bacd49de3307","Type":"ContainerDied","Data":"fee8a2764c75ffcf9093aa3094044368234df7ef760cbafe08f3d681ae3293ae"}
Nov 23 15:13:15 crc kubenswrapper[4718]: I1123 15:13:15.365963 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b"
Nov 23 15:13:15 crc kubenswrapper[4718]: I1123 15:13:15.533587 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9cfj\" (UniqueName: \"kubernetes.io/projected/843c6972-d172-42d0-9c7c-bacd49de3307-kube-api-access-m9cfj\") pod \"843c6972-d172-42d0-9c7c-bacd49de3307\" (UID: \"843c6972-d172-42d0-9c7c-bacd49de3307\") "
Nov 23 15:13:15 crc kubenswrapper[4718]: I1123 15:13:15.533647 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/843c6972-d172-42d0-9c7c-bacd49de3307-ssh-key\") pod \"843c6972-d172-42d0-9c7c-bacd49de3307\" (UID: \"843c6972-d172-42d0-9c7c-bacd49de3307\") "
Nov 23 15:13:15 crc kubenswrapper[4718]: I1123 15:13:15.533707 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/843c6972-d172-42d0-9c7c-bacd49de3307-inventory\") pod \"843c6972-d172-42d0-9c7c-bacd49de3307\" (UID: \"843c6972-d172-42d0-9c7c-bacd49de3307\") "
Nov 23 15:13:15 crc kubenswrapper[4718]: I1123 15:13:15.543659 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/843c6972-d172-42d0-9c7c-bacd49de3307-kube-api-access-m9cfj" (OuterVolumeSpecName: "kube-api-access-m9cfj") pod "843c6972-d172-42d0-9c7c-bacd49de3307" (UID: "843c6972-d172-42d0-9c7c-bacd49de3307"). InnerVolumeSpecName "kube-api-access-m9cfj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:13:15 crc kubenswrapper[4718]: I1123 15:13:15.560942 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/843c6972-d172-42d0-9c7c-bacd49de3307-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "843c6972-d172-42d0-9c7c-bacd49de3307" (UID: "843c6972-d172-42d0-9c7c-bacd49de3307"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:13:15 crc kubenswrapper[4718]: I1123 15:13:15.569898 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/843c6972-d172-42d0-9c7c-bacd49de3307-inventory" (OuterVolumeSpecName: "inventory") pod "843c6972-d172-42d0-9c7c-bacd49de3307" (UID: "843c6972-d172-42d0-9c7c-bacd49de3307"). InnerVolumeSpecName "inventory".
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:13:15 crc kubenswrapper[4718]: I1123 15:13:15.636241 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9cfj\" (UniqueName: \"kubernetes.io/projected/843c6972-d172-42d0-9c7c-bacd49de3307-kube-api-access-m9cfj\") on node \"crc\" DevicePath \"\"" Nov 23 15:13:15 crc kubenswrapper[4718]: I1123 15:13:15.636296 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/843c6972-d172-42d0-9c7c-bacd49de3307-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 15:13:15 crc kubenswrapper[4718]: I1123 15:13:15.636316 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/843c6972-d172-42d0-9c7c-bacd49de3307-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 15:13:15 crc kubenswrapper[4718]: I1123 15:13:15.992164 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b" event={"ID":"843c6972-d172-42d0-9c7c-bacd49de3307","Type":"ContainerDied","Data":"7d82c611f52dd69c4113c179fe3eae331d2608f9efe1005d1f5009a8ca266e1a"} Nov 23 15:13:15 crc kubenswrapper[4718]: I1123 15:13:15.992541 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d82c611f52dd69c4113c179fe3eae331d2608f9efe1005d1f5009a8ca266e1a" Nov 23 15:13:15 crc kubenswrapper[4718]: I1123 15:13:15.992195 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b" Nov 23 15:13:16 crc kubenswrapper[4718]: I1123 15:13:16.097756 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m"] Nov 23 15:13:16 crc kubenswrapper[4718]: E1123 15:13:16.098229 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="843c6972-d172-42d0-9c7c-bacd49de3307" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 23 15:13:16 crc kubenswrapper[4718]: I1123 15:13:16.098256 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="843c6972-d172-42d0-9c7c-bacd49de3307" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 23 15:13:16 crc kubenswrapper[4718]: I1123 15:13:16.098739 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="843c6972-d172-42d0-9c7c-bacd49de3307" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 23 15:13:16 crc kubenswrapper[4718]: I1123 15:13:16.100120 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m" Nov 23 15:13:16 crc kubenswrapper[4718]: I1123 15:13:16.102531 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 15:13:16 crc kubenswrapper[4718]: I1123 15:13:16.103062 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m6ng8" Nov 23 15:13:16 crc kubenswrapper[4718]: I1123 15:13:16.103491 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 15:13:16 crc kubenswrapper[4718]: I1123 15:13:16.104492 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 15:13:16 crc kubenswrapper[4718]: I1123 15:13:16.109334 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m"] Nov 23 15:13:16 crc kubenswrapper[4718]: I1123 15:13:16.247130 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m\" (UID: \"a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m" Nov 23 15:13:16 crc kubenswrapper[4718]: I1123 15:13:16.247260 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkf6s\" (UniqueName: \"kubernetes.io/projected/a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f-kube-api-access-gkf6s\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m\" (UID: \"a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m" Nov 23 15:13:16 crc kubenswrapper[4718]: I1123 15:13:16.247327 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m\" (UID: \"a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m" Nov 23 15:13:16 crc kubenswrapper[4718]: I1123 15:13:16.349548 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m\" (UID: \"a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m" Nov 23 15:13:16 crc kubenswrapper[4718]: I1123 15:13:16.349707 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkf6s\" (UniqueName: \"kubernetes.io/projected/a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f-kube-api-access-gkf6s\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m\" (UID: \"a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m" Nov 23 15:13:16 crc kubenswrapper[4718]: I1123 15:13:16.349772 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f-ssh-key\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m\" (UID: \"a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m" Nov 23 15:13:16 crc kubenswrapper[4718]: I1123 15:13:16.354621 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m\" (UID: \"a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m" Nov 23 15:13:16 crc kubenswrapper[4718]: I1123 15:13:16.360097 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m\" (UID: \"a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m" Nov 23 15:13:16 crc kubenswrapper[4718]: I1123 15:13:16.372399 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkf6s\" (UniqueName: \"kubernetes.io/projected/a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f-kube-api-access-gkf6s\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m\" (UID: \"a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m" Nov 23 15:13:16 crc kubenswrapper[4718]: I1123 15:13:16.422150 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m" Nov 23 15:13:16 crc kubenswrapper[4718]: I1123 15:13:16.441120 4718 scope.go:117] "RemoveContainer" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c" Nov 23 15:13:16 crc kubenswrapper[4718]: E1123 15:13:16.441768 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:13:16 crc kubenswrapper[4718]: I1123 15:13:16.944114 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m"] Nov 23 15:13:17 crc kubenswrapper[4718]: I1123 15:13:17.002747 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m" event={"ID":"a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f","Type":"ContainerStarted","Data":"16413f98eea1940c2afb44659f3b02e8e16f38b6b4885d06389260ef02d62af1"} Nov 23 15:13:18 crc kubenswrapper[4718]: I1123 15:13:18.014638 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m" event={"ID":"a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f","Type":"ContainerStarted","Data":"d6314dc4e46a5bc83f7b8dc8e3503a60e0dc92396a2cdd6253080a22d9e048ae"} Nov 23 15:13:18 crc kubenswrapper[4718]: I1123 15:13:18.077146 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m" podStartSLOduration=1.570834932 podStartE2EDuration="2.077128171s" 
podCreationTimestamp="2025-11-23 15:13:16 +0000 UTC" firstStartedPulling="2025-11-23 15:13:16.950180479 +0000 UTC m=+1648.189800323" lastFinishedPulling="2025-11-23 15:13:17.456473718 +0000 UTC m=+1648.696093562" observedRunningTime="2025-11-23 15:13:18.069659434 +0000 UTC m=+1649.309279288" watchObservedRunningTime="2025-11-23 15:13:18.077128171 +0000 UTC m=+1649.316748015" Nov 23 15:13:23 crc kubenswrapper[4718]: I1123 15:13:23.058016 4718 generic.go:334] "Generic (PLEG): container finished" podID="a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f" containerID="d6314dc4e46a5bc83f7b8dc8e3503a60e0dc92396a2cdd6253080a22d9e048ae" exitCode=0 Nov 23 15:13:23 crc kubenswrapper[4718]: I1123 15:13:23.058103 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m" event={"ID":"a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f","Type":"ContainerDied","Data":"d6314dc4e46a5bc83f7b8dc8e3503a60e0dc92396a2cdd6253080a22d9e048ae"} Nov 23 15:13:24 crc kubenswrapper[4718]: I1123 15:13:24.510903 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m" Nov 23 15:13:24 crc kubenswrapper[4718]: I1123 15:13:24.622013 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkf6s\" (UniqueName: \"kubernetes.io/projected/a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f-kube-api-access-gkf6s\") pod \"a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f\" (UID: \"a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f\") " Nov 23 15:13:24 crc kubenswrapper[4718]: I1123 15:13:24.622141 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f-inventory\") pod \"a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f\" (UID: \"a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f\") " Nov 23 15:13:24 crc kubenswrapper[4718]: I1123 15:13:24.622288 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f-ssh-key\") pod \"a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f\" (UID: \"a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f\") " Nov 23 15:13:24 crc kubenswrapper[4718]: I1123 15:13:24.628162 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f-kube-api-access-gkf6s" (OuterVolumeSpecName: "kube-api-access-gkf6s") pod "a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f" (UID: "a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f"). InnerVolumeSpecName "kube-api-access-gkf6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:13:24 crc kubenswrapper[4718]: I1123 15:13:24.663958 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f" (UID: "a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:13:24 crc kubenswrapper[4718]: I1123 15:13:24.668952 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f-inventory" (OuterVolumeSpecName: "inventory") pod "a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f" (UID: "a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:13:24 crc kubenswrapper[4718]: I1123 15:13:24.726430 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 15:13:24 crc kubenswrapper[4718]: I1123 15:13:24.726484 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 15:13:24 crc kubenswrapper[4718]: I1123 15:13:24.726496 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkf6s\" (UniqueName: \"kubernetes.io/projected/a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f-kube-api-access-gkf6s\") on node \"crc\" DevicePath \"\"" Nov 23 15:13:25 crc kubenswrapper[4718]: I1123 15:13:25.081314 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m" event={"ID":"a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f","Type":"ContainerDied","Data":"16413f98eea1940c2afb44659f3b02e8e16f38b6b4885d06389260ef02d62af1"} Nov 23 15:13:25 crc kubenswrapper[4718]: I1123 15:13:25.081376 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16413f98eea1940c2afb44659f3b02e8e16f38b6b4885d06389260ef02d62af1" Nov 23 15:13:25 crc kubenswrapper[4718]: I1123 15:13:25.081567 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m" Nov 23 15:13:25 crc kubenswrapper[4718]: I1123 15:13:25.158853 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-z276b"] Nov 23 15:13:25 crc kubenswrapper[4718]: E1123 15:13:25.159296 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 23 15:13:25 crc kubenswrapper[4718]: I1123 15:13:25.159316 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 23 15:13:25 crc kubenswrapper[4718]: I1123 15:13:25.159572 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 23 15:13:25 crc kubenswrapper[4718]: I1123 15:13:25.160331 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z276b" Nov 23 15:13:25 crc kubenswrapper[4718]: I1123 15:13:25.162643 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 15:13:25 crc kubenswrapper[4718]: I1123 15:13:25.162855 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m6ng8" Nov 23 15:13:25 crc kubenswrapper[4718]: I1123 15:13:25.162914 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 15:13:25 crc kubenswrapper[4718]: I1123 15:13:25.164383 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 15:13:25 crc kubenswrapper[4718]: I1123 15:13:25.172538 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-z276b"] Nov 23 15:13:25 crc kubenswrapper[4718]: I1123 15:13:25.235778 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slv8g\" (UniqueName: \"kubernetes.io/projected/6bc22e89-875e-4f3a-93b3-5e738e897c23-kube-api-access-slv8g\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z276b\" (UID: \"6bc22e89-875e-4f3a-93b3-5e738e897c23\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z276b" Nov 23 15:13:25 crc kubenswrapper[4718]: I1123 15:13:25.236144 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6bc22e89-875e-4f3a-93b3-5e738e897c23-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z276b\" (UID: \"6bc22e89-875e-4f3a-93b3-5e738e897c23\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z276b" Nov 23 15:13:25 crc kubenswrapper[4718]: I1123 15:13:25.236258 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6bc22e89-875e-4f3a-93b3-5e738e897c23-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z276b\" (UID: \"6bc22e89-875e-4f3a-93b3-5e738e897c23\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z276b" Nov 23 15:13:25 crc kubenswrapper[4718]: I1123 15:13:25.339071 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6bc22e89-875e-4f3a-93b3-5e738e897c23-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z276b\" (UID: \"6bc22e89-875e-4f3a-93b3-5e738e897c23\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z276b" Nov 23 15:13:25 crc kubenswrapper[4718]: I1123 15:13:25.339175 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6bc22e89-875e-4f3a-93b3-5e738e897c23-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z276b\" (UID: \"6bc22e89-875e-4f3a-93b3-5e738e897c23\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z276b" Nov 23 15:13:25 crc kubenswrapper[4718]: I1123 15:13:25.339483 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slv8g\" (UniqueName: \"kubernetes.io/projected/6bc22e89-875e-4f3a-93b3-5e738e897c23-kube-api-access-slv8g\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z276b\" (UID: 
\"6bc22e89-875e-4f3a-93b3-5e738e897c23\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z276b" Nov 23 15:13:25 crc kubenswrapper[4718]: I1123 15:13:25.344399 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6bc22e89-875e-4f3a-93b3-5e738e897c23-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z276b\" (UID: \"6bc22e89-875e-4f3a-93b3-5e738e897c23\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z276b" Nov 23 15:13:25 crc kubenswrapper[4718]: I1123 15:13:25.344614 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6bc22e89-875e-4f3a-93b3-5e738e897c23-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z276b\" (UID: \"6bc22e89-875e-4f3a-93b3-5e738e897c23\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z276b" Nov 23 15:13:25 crc kubenswrapper[4718]: I1123 15:13:25.359408 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slv8g\" (UniqueName: \"kubernetes.io/projected/6bc22e89-875e-4f3a-93b3-5e738e897c23-kube-api-access-slv8g\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z276b\" (UID: \"6bc22e89-875e-4f3a-93b3-5e738e897c23\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z276b" Nov 23 15:13:25 crc kubenswrapper[4718]: I1123 15:13:25.478936 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z276b" Nov 23 15:13:26 crc kubenswrapper[4718]: W1123 15:13:26.102565 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bc22e89_875e_4f3a_93b3_5e738e897c23.slice/crio-04de1a3f477bcc82034fd6bdafb0f9fa6c3d50c75f0a33d4662a3eee7de159fa WatchSource:0}: Error finding container 04de1a3f477bcc82034fd6bdafb0f9fa6c3d50c75f0a33d4662a3eee7de159fa: Status 404 returned error can't find the container with id 04de1a3f477bcc82034fd6bdafb0f9fa6c3d50c75f0a33d4662a3eee7de159fa Nov 23 15:13:26 crc kubenswrapper[4718]: I1123 15:13:26.106632 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-z276b"] Nov 23 15:13:27 crc kubenswrapper[4718]: I1123 15:13:27.059502 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-722a-account-create-z26wb"] Nov 23 15:13:27 crc kubenswrapper[4718]: I1123 15:13:27.075940 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-z9f2r"] Nov 23 15:13:27 crc kubenswrapper[4718]: I1123 15:13:27.090511 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-txb4n"] Nov 23 15:13:27 crc kubenswrapper[4718]: I1123 15:13:27.098602 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-722a-account-create-z26wb"] Nov 23 15:13:27 crc kubenswrapper[4718]: I1123 15:13:27.100075 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z276b" event={"ID":"6bc22e89-875e-4f3a-93b3-5e738e897c23","Type":"ContainerStarted","Data":"6d621fc6f55825c7631797166eb97720d3594745e6ac66ee76abbc0e7d033864"} Nov 23 15:13:27 crc kubenswrapper[4718]: I1123 15:13:27.100129 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z276b" 
event={"ID":"6bc22e89-875e-4f3a-93b3-5e738e897c23","Type":"ContainerStarted","Data":"04de1a3f477bcc82034fd6bdafb0f9fa6c3d50c75f0a33d4662a3eee7de159fa"} Nov 23 15:13:27 crc kubenswrapper[4718]: I1123 15:13:27.106518 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cae1-account-create-gmjg7"] Nov 23 15:13:27 crc kubenswrapper[4718]: I1123 15:13:27.113887 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-6b90-account-create-9llwr"] Nov 23 15:13:27 crc kubenswrapper[4718]: I1123 15:13:27.120599 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-dmjbz"] Nov 23 15:13:27 crc kubenswrapper[4718]: I1123 15:13:27.127015 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-z9f2r"] Nov 23 15:13:27 crc kubenswrapper[4718]: I1123 15:13:27.132786 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-6b90-account-create-9llwr"] Nov 23 15:13:27 crc kubenswrapper[4718]: I1123 15:13:27.138567 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-txb4n"] Nov 23 15:13:27 crc kubenswrapper[4718]: I1123 15:13:27.144001 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cae1-account-create-gmjg7"] Nov 23 15:13:27 crc kubenswrapper[4718]: I1123 15:13:27.149536 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-dmjbz"] Nov 23 15:13:27 crc kubenswrapper[4718]: I1123 15:13:27.153617 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z276b" podStartSLOduration=1.676816334 podStartE2EDuration="2.153593831s" podCreationTimestamp="2025-11-23 15:13:25 +0000 UTC" firstStartedPulling="2025-11-23 15:13:26.105621739 +0000 UTC m=+1657.345241583" lastFinishedPulling="2025-11-23 15:13:26.582399226 +0000 UTC m=+1657.822019080" observedRunningTime="2025-11-23 15:13:27.121690008 +0000 UTC m=+1658.361309872" watchObservedRunningTime="2025-11-23 15:13:27.153593831 +0000 UTC m=+1658.393213665" Nov 23 15:13:28 crc kubenswrapper[4718]: I1123 15:13:28.441950 4718 scope.go:117] "RemoveContainer" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c" Nov 23 15:13:28 crc kubenswrapper[4718]: E1123 15:13:28.442582 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:13:28 crc kubenswrapper[4718]: I1123 15:13:28.454420 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09de4533-d2f0-42cc-8bc2-170efce7f2e7" path="/var/lib/kubelet/pods/09de4533-d2f0-42cc-8bc2-170efce7f2e7/volumes" Nov 23 15:13:28 crc kubenswrapper[4718]: I1123 15:13:28.455384 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2050dab9-a003-4e88-b06c-5fb9cadd5956" path="/var/lib/kubelet/pods/2050dab9-a003-4e88-b06c-5fb9cadd5956/volumes" Nov 23 15:13:28 crc kubenswrapper[4718]: I1123 15:13:28.456570 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a532b56-cb65-4f6c-bee8-a72cf66ead01" 
path="/var/lib/kubelet/pods/4a532b56-cb65-4f6c-bee8-a72cf66ead01/volumes" Nov 23 15:13:28 crc kubenswrapper[4718]: I1123 15:13:28.457514 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91200876-6356-412a-b33a-1fe4ccb7ac38" path="/var/lib/kubelet/pods/91200876-6356-412a-b33a-1fe4ccb7ac38/volumes" Nov 23 15:13:28 crc kubenswrapper[4718]: I1123 15:13:28.459212 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9475dfac-7aca-455a-ac54-158d673a4b6c" path="/var/lib/kubelet/pods/9475dfac-7aca-455a-ac54-158d673a4b6c/volumes" Nov 23 15:13:28 crc kubenswrapper[4718]: I1123 15:13:28.460542 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e6e2f92-5e58-4964-a7cf-e04f3c87b20f" path="/var/lib/kubelet/pods/9e6e2f92-5e58-4964-a7cf-e04f3c87b20f/volumes" Nov 23 15:13:41 crc kubenswrapper[4718]: I1123 15:13:41.440662 4718 scope.go:117] "RemoveContainer" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c" Nov 23 15:13:41 crc kubenswrapper[4718]: E1123 15:13:41.443622 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:13:44 crc kubenswrapper[4718]: I1123 15:13:44.049035 4718 scope.go:117] "RemoveContainer" containerID="8a49a7896dc8a92f8b9eebd99cf02527effd8db093472fdc6f4b6e03f796fc95" Nov 23 15:13:44 crc kubenswrapper[4718]: I1123 15:13:44.120832 4718 scope.go:117] "RemoveContainer" containerID="ecc8668f1c44c21f2579dc3d2c0c6c98e4b067637d0542bebce13521bd1fe8e1" Nov 23 15:13:44 crc kubenswrapper[4718]: I1123 15:13:44.187615 4718 scope.go:117] "RemoveContainer" containerID="780a5c0cc0a14b2dabe6f3418818247a9c2b067524618dccedb2afc7ea504c0a" Nov 23 15:13:44 crc kubenswrapper[4718]: I1123 15:13:44.227527 4718 scope.go:117] "RemoveContainer" containerID="bce4b8e30e462952bd2e068ce95bc2c5f7651ec34de3aa0a356265ee5af6574b" Nov 23 15:13:44 crc kubenswrapper[4718]: I1123 15:13:44.276178 4718 scope.go:117] "RemoveContainer" containerID="d096ba372fe741fa0427963212ec8a73880cf7758c556e9ab109f837a775f8a3" Nov 23 15:13:44 crc kubenswrapper[4718]: I1123 15:13:44.328542 4718 scope.go:117] "RemoveContainer" containerID="734ac2a3f45f3019f1e2c7519f8c5a54c9a4a53b7586e139d3953c703d011162" Nov 23 15:13:44 crc kubenswrapper[4718]: I1123 15:13:44.371668 4718 scope.go:117] "RemoveContainer" containerID="3f8a94a95d1222745e8bbeaaf74770c0ba13d44cdb450beaf9858acca915423c" Nov 23 15:13:44 crc kubenswrapper[4718]: I1123 15:13:44.396850 4718 scope.go:117] "RemoveContainer" containerID="d1066a1af9b63fc9b869f056d6dafb7bca7a08016a551c9379cb7c6155f53e0f" Nov 23 15:13:44 crc kubenswrapper[4718]: I1123 15:13:44.419140 4718 scope.go:117] "RemoveContainer" containerID="1bd9458474636a3f7139d8fa351f9f2a7afb035ed2f0149215e938e465c014c8" Nov 23 15:13:44 crc kubenswrapper[4718]: I1123 15:13:44.451530 4718 scope.go:117] "RemoveContainer" containerID="36fc12c4c4c04f3872dbd7dddd3dcf4c463fcaac134b0fceaab5eb6f76e13592" Nov 23 15:13:55 crc kubenswrapper[4718]: I1123 15:13:55.441227 4718 scope.go:117] "RemoveContainer" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c" Nov 23 15:13:55 crc kubenswrapper[4718]: E1123 15:13:55.442165 4718 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:13:59 crc kubenswrapper[4718]: I1123 15:13:59.039826 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-d4jtn"] Nov 23 15:13:59 crc kubenswrapper[4718]: I1123 15:13:59.051515 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-d4jtn"] Nov 23 15:14:00 crc kubenswrapper[4718]: I1123 15:14:00.451521 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec5dfd6e-eef7-47d1-a7bf-a3f406387f34" path="/var/lib/kubelet/pods/ec5dfd6e-eef7-47d1-a7bf-a3f406387f34/volumes" Nov 23 15:14:05 crc kubenswrapper[4718]: I1123 15:14:05.481362 4718 generic.go:334] "Generic (PLEG): container finished" podID="6bc22e89-875e-4f3a-93b3-5e738e897c23" containerID="6d621fc6f55825c7631797166eb97720d3594745e6ac66ee76abbc0e7d033864" exitCode=0 Nov 23 15:14:05 crc kubenswrapper[4718]: I1123 15:14:05.481465 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z276b" event={"ID":"6bc22e89-875e-4f3a-93b3-5e738e897c23","Type":"ContainerDied","Data":"6d621fc6f55825c7631797166eb97720d3594745e6ac66ee76abbc0e7d033864"} Nov 23 15:14:06 crc kubenswrapper[4718]: I1123 15:14:06.892975 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z276b" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.044207 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6bc22e89-875e-4f3a-93b3-5e738e897c23-ssh-key\") pod \"6bc22e89-875e-4f3a-93b3-5e738e897c23\" (UID: \"6bc22e89-875e-4f3a-93b3-5e738e897c23\") " Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.044356 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slv8g\" (UniqueName: \"kubernetes.io/projected/6bc22e89-875e-4f3a-93b3-5e738e897c23-kube-api-access-slv8g\") pod \"6bc22e89-875e-4f3a-93b3-5e738e897c23\" (UID: \"6bc22e89-875e-4f3a-93b3-5e738e897c23\") " Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.044504 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6bc22e89-875e-4f3a-93b3-5e738e897c23-inventory\") pod \"6bc22e89-875e-4f3a-93b3-5e738e897c23\" (UID: \"6bc22e89-875e-4f3a-93b3-5e738e897c23\") " Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.049762 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bc22e89-875e-4f3a-93b3-5e738e897c23-kube-api-access-slv8g" (OuterVolumeSpecName: "kube-api-access-slv8g") pod "6bc22e89-875e-4f3a-93b3-5e738e897c23" (UID: "6bc22e89-875e-4f3a-93b3-5e738e897c23"). InnerVolumeSpecName "kube-api-access-slv8g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.072612 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bc22e89-875e-4f3a-93b3-5e738e897c23-inventory" (OuterVolumeSpecName: "inventory") pod "6bc22e89-875e-4f3a-93b3-5e738e897c23" (UID: "6bc22e89-875e-4f3a-93b3-5e738e897c23"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.077140 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bc22e89-875e-4f3a-93b3-5e738e897c23-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6bc22e89-875e-4f3a-93b3-5e738e897c23" (UID: "6bc22e89-875e-4f3a-93b3-5e738e897c23"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.146509 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6bc22e89-875e-4f3a-93b3-5e738e897c23-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.146545 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6bc22e89-875e-4f3a-93b3-5e738e897c23-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.146555 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slv8g\" (UniqueName: \"kubernetes.io/projected/6bc22e89-875e-4f3a-93b3-5e738e897c23-kube-api-access-slv8g\") on node \"crc\" DevicePath \"\"" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.469908 4718 scope.go:117] "RemoveContainer" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c" Nov 23 15:14:07 crc kubenswrapper[4718]: E1123 15:14:07.470427 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.504713 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z276b" event={"ID":"6bc22e89-875e-4f3a-93b3-5e738e897c23","Type":"ContainerDied","Data":"04de1a3f477bcc82034fd6bdafb0f9fa6c3d50c75f0a33d4662a3eee7de159fa"} Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.504762 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04de1a3f477bcc82034fd6bdafb0f9fa6c3d50c75f0a33d4662a3eee7de159fa" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.504819 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z276b" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.605129 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2"] Nov 23 15:14:07 crc kubenswrapper[4718]: E1123 15:14:07.605612 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bc22e89-875e-4f3a-93b3-5e738e897c23" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.605634 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bc22e89-875e-4f3a-93b3-5e738e897c23" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.606178 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bc22e89-875e-4f3a-93b3-5e738e897c23" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.607281 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.609869 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m6ng8" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.610393 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.614864 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.615108 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.627535 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2"] Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.775799 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9znn\" (UniqueName: \"kubernetes.io/projected/4785d6c3-899e-4bfd-9333-b4493a1cae09-kube-api-access-f9znn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2\" (UID: \"4785d6c3-899e-4bfd-9333-b4493a1cae09\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.776517 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4785d6c3-899e-4bfd-9333-b4493a1cae09-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2\" (UID: \"4785d6c3-899e-4bfd-9333-b4493a1cae09\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.776685 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4785d6c3-899e-4bfd-9333-b4493a1cae09-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2\" (UID: \"4785d6c3-899e-4bfd-9333-b4493a1cae09\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.878228 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4785d6c3-899e-4bfd-9333-b4493a1cae09-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2\" (UID: \"4785d6c3-899e-4bfd-9333-b4493a1cae09\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.878354 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9znn\" (UniqueName: \"kubernetes.io/projected/4785d6c3-899e-4bfd-9333-b4493a1cae09-kube-api-access-f9znn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2\" (UID: \"4785d6c3-899e-4bfd-9333-b4493a1cae09\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.878425 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4785d6c3-899e-4bfd-9333-b4493a1cae09-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2\" (UID: \"4785d6c3-899e-4bfd-9333-b4493a1cae09\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.884588 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4785d6c3-899e-4bfd-9333-b4493a1cae09-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2\" (UID: \"4785d6c3-899e-4bfd-9333-b4493a1cae09\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.887193 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4785d6c3-899e-4bfd-9333-b4493a1cae09-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2\" (UID: \"4785d6c3-899e-4bfd-9333-b4493a1cae09\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.896137 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9znn\" (UniqueName: \"kubernetes.io/projected/4785d6c3-899e-4bfd-9333-b4493a1cae09-kube-api-access-f9znn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2\" (UID: \"4785d6c3-899e-4bfd-9333-b4493a1cae09\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2" Nov 23 15:14:07 crc kubenswrapper[4718]: I1123 15:14:07.941462 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2" Nov 23 15:14:08 crc kubenswrapper[4718]: I1123 15:14:08.512954 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2"] Nov 23 15:14:09 crc kubenswrapper[4718]: I1123 15:14:09.527758 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2" event={"ID":"4785d6c3-899e-4bfd-9333-b4493a1cae09","Type":"ContainerStarted","Data":"e05d5b6c5f1e6fef8a29ea737d0192bb58b2bea69f286fe9ef1a4b268af183ab"} Nov 23 15:14:09 crc kubenswrapper[4718]: I1123 15:14:09.528220 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2" event={"ID":"4785d6c3-899e-4bfd-9333-b4493a1cae09","Type":"ContainerStarted","Data":"69a3b149e1c506db89b163ef0c07d614fb89663d75720120fd8739e72c9f6206"} Nov 23 15:14:09 crc kubenswrapper[4718]: I1123 15:14:09.556750 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2" podStartSLOduration=2.100037809 podStartE2EDuration="2.556729297s" podCreationTimestamp="2025-11-23 15:14:07 +0000 UTC" firstStartedPulling="2025-11-23 15:14:08.530188241 +0000 UTC m=+1699.769808085" lastFinishedPulling="2025-11-23 15:14:08.986879689 +0000 UTC m=+1700.226499573" observedRunningTime="2025-11-23 15:14:09.543803762 +0000 UTC m=+1700.783423616" watchObservedRunningTime="2025-11-23 15:14:09.556729297 +0000 UTC m=+1700.796349151" Nov 23 15:14:18 crc kubenswrapper[4718]: I1123 15:14:18.441027 4718 scope.go:117] "RemoveContainer" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c" Nov 23 15:14:18 crc kubenswrapper[4718]: E1123 15:14:18.441942 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:14:22 crc kubenswrapper[4718]: I1123 15:14:22.047400 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-qtcsw"] Nov 23 15:14:22 crc kubenswrapper[4718]: I1123 15:14:22.056692 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-qtcsw"] Nov 23 15:14:22 crc kubenswrapper[4718]: I1123 15:14:22.455741 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5d8481e-f2de-46c5-8b56-7c85e054378d" path="/var/lib/kubelet/pods/c5d8481e-f2de-46c5-8b56-7c85e054378d/volumes" Nov 23 15:14:24 crc kubenswrapper[4718]: I1123 15:14:24.038918 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k2kvx"] Nov 23 15:14:24 crc kubenswrapper[4718]: I1123 15:14:24.049198 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k2kvx"] Nov 23 15:14:24 crc kubenswrapper[4718]: I1123 15:14:24.452690 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="371cc685-4543-4483-83bf-bb04f1d750b5" path="/var/lib/kubelet/pods/371cc685-4543-4483-83bf-bb04f1d750b5/volumes" Nov 23 15:14:32 crc kubenswrapper[4718]: I1123 15:14:32.441594 4718 scope.go:117] 
"RemoveContainer" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c" Nov 23 15:14:32 crc kubenswrapper[4718]: E1123 15:14:32.442322 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:14:43 crc kubenswrapper[4718]: I1123 15:14:43.447749 4718 scope.go:117] "RemoveContainer" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c" Nov 23 15:14:43 crc kubenswrapper[4718]: E1123 15:14:43.450395 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:14:44 crc kubenswrapper[4718]: I1123 15:14:44.637329 4718 scope.go:117] "RemoveContainer" containerID="bbf363906b79c5b22a10349c4be4c0518138546ef59f20f6d612c1741a833b4b" Nov 23 15:14:44 crc kubenswrapper[4718]: I1123 15:14:44.684024 4718 scope.go:117] "RemoveContainer" containerID="786c7aed7a2bc3d301bedc9a8377f85b18e5372e6c0fc3d62fb9c39b98483139" Nov 23 15:14:44 crc kubenswrapper[4718]: I1123 15:14:44.755708 4718 scope.go:117] "RemoveContainer" containerID="20bdfde85037bd0b202b821ceb6b80b89569184ba715667609480e9b298198ae" Nov 23 15:14:58 crc kubenswrapper[4718]: I1123 15:14:58.441597 4718 scope.go:117] "RemoveContainer" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c" Nov 23 15:14:58 crc kubenswrapper[4718]: E1123 15:14:58.442894 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:15:00 crc kubenswrapper[4718]: I1123 15:15:00.150709 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398515-mtljj"] Nov 23 15:15:00 crc kubenswrapper[4718]: I1123 15:15:00.152177 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398515-mtljj" Nov 23 15:15:00 crc kubenswrapper[4718]: I1123 15:15:00.157713 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 15:15:00 crc kubenswrapper[4718]: I1123 15:15:00.158499 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 15:15:00 crc kubenswrapper[4718]: I1123 15:15:00.160242 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398515-mtljj"] Nov 23 15:15:00 crc kubenswrapper[4718]: I1123 15:15:00.350016 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e575724-7675-4e80-855c-89eb6cf4ff18-secret-volume\") pod \"collect-profiles-29398515-mtljj\" (UID: \"0e575724-7675-4e80-855c-89eb6cf4ff18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398515-mtljj" Nov 23 15:15:00 crc kubenswrapper[4718]: I1123 15:15:00.350230 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs5dk\" (UniqueName: \"kubernetes.io/projected/0e575724-7675-4e80-855c-89eb6cf4ff18-kube-api-access-zs5dk\") pod \"collect-profiles-29398515-mtljj\" (UID: \"0e575724-7675-4e80-855c-89eb6cf4ff18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398515-mtljj" Nov 23 15:15:00 crc kubenswrapper[4718]: I1123 15:15:00.350310 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e575724-7675-4e80-855c-89eb6cf4ff18-config-volume\") pod \"collect-profiles-29398515-mtljj\" (UID: \"0e575724-7675-4e80-855c-89eb6cf4ff18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398515-mtljj" Nov 23 15:15:00 crc kubenswrapper[4718]: I1123 15:15:00.452044 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e575724-7675-4e80-855c-89eb6cf4ff18-secret-volume\") pod \"collect-profiles-29398515-mtljj\" (UID: \"0e575724-7675-4e80-855c-89eb6cf4ff18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398515-mtljj" Nov 23 15:15:00 crc kubenswrapper[4718]: I1123 15:15:00.452644 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs5dk\" (UniqueName: \"kubernetes.io/projected/0e575724-7675-4e80-855c-89eb6cf4ff18-kube-api-access-zs5dk\") pod \"collect-profiles-29398515-mtljj\" (UID: \"0e575724-7675-4e80-855c-89eb6cf4ff18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398515-mtljj" Nov 23 15:15:00 crc kubenswrapper[4718]: I1123 15:15:00.453423 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e575724-7675-4e80-855c-89eb6cf4ff18-config-volume\") pod \"collect-profiles-29398515-mtljj\" (UID: \"0e575724-7675-4e80-855c-89eb6cf4ff18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398515-mtljj" Nov 23 15:15:00 crc kubenswrapper[4718]: I1123 15:15:00.454713 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e575724-7675-4e80-855c-89eb6cf4ff18-config-volume\") pod 
\"collect-profiles-29398515-mtljj\" (UID: \"0e575724-7675-4e80-855c-89eb6cf4ff18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398515-mtljj" Nov 23 15:15:00 crc kubenswrapper[4718]: I1123 15:15:00.459146 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e575724-7675-4e80-855c-89eb6cf4ff18-secret-volume\") pod \"collect-profiles-29398515-mtljj\" (UID: \"0e575724-7675-4e80-855c-89eb6cf4ff18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398515-mtljj" Nov 23 15:15:00 crc kubenswrapper[4718]: I1123 15:15:00.487121 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs5dk\" (UniqueName: \"kubernetes.io/projected/0e575724-7675-4e80-855c-89eb6cf4ff18-kube-api-access-zs5dk\") pod \"collect-profiles-29398515-mtljj\" (UID: \"0e575724-7675-4e80-855c-89eb6cf4ff18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398515-mtljj" Nov 23 15:15:00 crc kubenswrapper[4718]: I1123 15:15:00.772878 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398515-mtljj" Nov 23 15:15:01 crc kubenswrapper[4718]: I1123 15:15:01.346228 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398515-mtljj"] Nov 23 15:15:02 crc kubenswrapper[4718]: I1123 15:15:02.023678 4718 generic.go:334] "Generic (PLEG): container finished" podID="0e575724-7675-4e80-855c-89eb6cf4ff18" containerID="c678dee0194f262d1fe3cb90d783fcb41f3a48d14f6118f6e92e8da5e59dcbdb" exitCode=0 Nov 23 15:15:02 crc kubenswrapper[4718]: I1123 15:15:02.023825 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398515-mtljj" event={"ID":"0e575724-7675-4e80-855c-89eb6cf4ff18","Type":"ContainerDied","Data":"c678dee0194f262d1fe3cb90d783fcb41f3a48d14f6118f6e92e8da5e59dcbdb"} Nov 23 15:15:02 crc kubenswrapper[4718]: I1123 15:15:02.024062 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398515-mtljj" event={"ID":"0e575724-7675-4e80-855c-89eb6cf4ff18","Type":"ContainerStarted","Data":"7a734e33f10571bd0f122203da18bf0b0427a9609ff0ccf6d85346161c6f3000"} Nov 23 15:15:03 crc kubenswrapper[4718]: I1123 15:15:03.032346 4718 generic.go:334] "Generic (PLEG): container finished" podID="4785d6c3-899e-4bfd-9333-b4493a1cae09" containerID="e05d5b6c5f1e6fef8a29ea737d0192bb58b2bea69f286fe9ef1a4b268af183ab" exitCode=0 Nov 23 15:15:03 crc kubenswrapper[4718]: I1123 15:15:03.032400 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2" event={"ID":"4785d6c3-899e-4bfd-9333-b4493a1cae09","Type":"ContainerDied","Data":"e05d5b6c5f1e6fef8a29ea737d0192bb58b2bea69f286fe9ef1a4b268af183ab"} Nov 23 15:15:03 crc kubenswrapper[4718]: I1123 15:15:03.370805 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398515-mtljj" Nov 23 15:15:03 crc kubenswrapper[4718]: I1123 15:15:03.508909 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e575724-7675-4e80-855c-89eb6cf4ff18-config-volume\") pod \"0e575724-7675-4e80-855c-89eb6cf4ff18\" (UID: \"0e575724-7675-4e80-855c-89eb6cf4ff18\") " Nov 23 15:15:03 crc kubenswrapper[4718]: I1123 15:15:03.509118 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs5dk\" (UniqueName: \"kubernetes.io/projected/0e575724-7675-4e80-855c-89eb6cf4ff18-kube-api-access-zs5dk\") pod \"0e575724-7675-4e80-855c-89eb6cf4ff18\" (UID: \"0e575724-7675-4e80-855c-89eb6cf4ff18\") " Nov 23 15:15:03 crc kubenswrapper[4718]: I1123 15:15:03.509864 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e575724-7675-4e80-855c-89eb6cf4ff18-secret-volume\") pod \"0e575724-7675-4e80-855c-89eb6cf4ff18\" (UID: \"0e575724-7675-4e80-855c-89eb6cf4ff18\") " Nov 23 15:15:03 crc kubenswrapper[4718]: I1123 15:15:03.510897 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e575724-7675-4e80-855c-89eb6cf4ff18-config-volume" (OuterVolumeSpecName: "config-volume") pod "0e575724-7675-4e80-855c-89eb6cf4ff18" (UID: "0e575724-7675-4e80-855c-89eb6cf4ff18"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:15:03 crc kubenswrapper[4718]: I1123 15:15:03.515650 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e575724-7675-4e80-855c-89eb6cf4ff18-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0e575724-7675-4e80-855c-89eb6cf4ff18" (UID: "0e575724-7675-4e80-855c-89eb6cf4ff18"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:15:03 crc kubenswrapper[4718]: I1123 15:15:03.515688 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e575724-7675-4e80-855c-89eb6cf4ff18-kube-api-access-zs5dk" (OuterVolumeSpecName: "kube-api-access-zs5dk") pod "0e575724-7675-4e80-855c-89eb6cf4ff18" (UID: "0e575724-7675-4e80-855c-89eb6cf4ff18"). InnerVolumeSpecName "kube-api-access-zs5dk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:15:03 crc kubenswrapper[4718]: I1123 15:15:03.612482 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e575724-7675-4e80-855c-89eb6cf4ff18-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 15:15:03 crc kubenswrapper[4718]: I1123 15:15:03.612510 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zs5dk\" (UniqueName: \"kubernetes.io/projected/0e575724-7675-4e80-855c-89eb6cf4ff18-kube-api-access-zs5dk\") on node \"crc\" DevicePath \"\"" Nov 23 15:15:03 crc kubenswrapper[4718]: I1123 15:15:03.612521 4718 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e575724-7675-4e80-855c-89eb6cf4ff18-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 15:15:04 crc kubenswrapper[4718]: I1123 15:15:04.046597 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398515-mtljj" event={"ID":"0e575724-7675-4e80-855c-89eb6cf4ff18","Type":"ContainerDied","Data":"7a734e33f10571bd0f122203da18bf0b0427a9609ff0ccf6d85346161c6f3000"} Nov 23 15:15:04 crc kubenswrapper[4718]: I1123 15:15:04.046647 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a734e33f10571bd0f122203da18bf0b0427a9609ff0ccf6d85346161c6f3000" Nov 23 15:15:04 crc kubenswrapper[4718]: I1123 15:15:04.046652 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398515-mtljj" Nov 23 15:15:04 crc kubenswrapper[4718]: I1123 15:15:04.520697 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2" Nov 23 15:15:04 crc kubenswrapper[4718]: I1123 15:15:04.638423 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4785d6c3-899e-4bfd-9333-b4493a1cae09-inventory\") pod \"4785d6c3-899e-4bfd-9333-b4493a1cae09\" (UID: \"4785d6c3-899e-4bfd-9333-b4493a1cae09\") " Nov 23 15:15:04 crc kubenswrapper[4718]: I1123 15:15:04.638533 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4785d6c3-899e-4bfd-9333-b4493a1cae09-ssh-key\") pod \"4785d6c3-899e-4bfd-9333-b4493a1cae09\" (UID: \"4785d6c3-899e-4bfd-9333-b4493a1cae09\") " Nov 23 15:15:04 crc kubenswrapper[4718]: I1123 15:15:04.638569 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9znn\" (UniqueName: \"kubernetes.io/projected/4785d6c3-899e-4bfd-9333-b4493a1cae09-kube-api-access-f9znn\") pod \"4785d6c3-899e-4bfd-9333-b4493a1cae09\" (UID: \"4785d6c3-899e-4bfd-9333-b4493a1cae09\") " Nov 23 15:15:04 crc kubenswrapper[4718]: I1123 15:15:04.644706 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4785d6c3-899e-4bfd-9333-b4493a1cae09-kube-api-access-f9znn" (OuterVolumeSpecName: "kube-api-access-f9znn") pod "4785d6c3-899e-4bfd-9333-b4493a1cae09" (UID: "4785d6c3-899e-4bfd-9333-b4493a1cae09"). InnerVolumeSpecName "kube-api-access-f9znn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:15:04 crc kubenswrapper[4718]: I1123 15:15:04.670363 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4785d6c3-899e-4bfd-9333-b4493a1cae09-inventory" (OuterVolumeSpecName: "inventory") pod "4785d6c3-899e-4bfd-9333-b4493a1cae09" (UID: "4785d6c3-899e-4bfd-9333-b4493a1cae09"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:15:04 crc kubenswrapper[4718]: I1123 15:15:04.674029 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4785d6c3-899e-4bfd-9333-b4493a1cae09-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4785d6c3-899e-4bfd-9333-b4493a1cae09" (UID: "4785d6c3-899e-4bfd-9333-b4493a1cae09"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:15:04 crc kubenswrapper[4718]: I1123 15:15:04.741765 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4785d6c3-899e-4bfd-9333-b4493a1cae09-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 15:15:04 crc kubenswrapper[4718]: I1123 15:15:04.741823 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9znn\" (UniqueName: \"kubernetes.io/projected/4785d6c3-899e-4bfd-9333-b4493a1cae09-kube-api-access-f9znn\") on node \"crc\" DevicePath \"\"" Nov 23 15:15:04 crc kubenswrapper[4718]: I1123 15:15:04.741836 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4785d6c3-899e-4bfd-9333-b4493a1cae09-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 15:15:05 crc kubenswrapper[4718]: I1123 15:15:05.060586 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2" event={"ID":"4785d6c3-899e-4bfd-9333-b4493a1cae09","Type":"ContainerDied","Data":"69a3b149e1c506db89b163ef0c07d614fb89663d75720120fd8739e72c9f6206"} Nov 23 15:15:05 crc kubenswrapper[4718]: I1123 15:15:05.060623 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69a3b149e1c506db89b163ef0c07d614fb89663d75720120fd8739e72c9f6206" Nov 23 15:15:05 crc kubenswrapper[4718]: I1123 15:15:05.060726 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2" Nov 23 15:15:05 crc kubenswrapper[4718]: I1123 15:15:05.168731 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-lsf9c"] Nov 23 15:15:05 crc kubenswrapper[4718]: E1123 15:15:05.169139 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e575724-7675-4e80-855c-89eb6cf4ff18" containerName="collect-profiles" Nov 23 15:15:05 crc kubenswrapper[4718]: I1123 15:15:05.169159 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e575724-7675-4e80-855c-89eb6cf4ff18" containerName="collect-profiles" Nov 23 15:15:05 crc kubenswrapper[4718]: E1123 15:15:05.169215 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4785d6c3-899e-4bfd-9333-b4493a1cae09" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 23 15:15:05 crc kubenswrapper[4718]: I1123 15:15:05.169226 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4785d6c3-899e-4bfd-9333-b4493a1cae09" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 23 15:15:05 crc kubenswrapper[4718]: I1123 15:15:05.169490 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e575724-7675-4e80-855c-89eb6cf4ff18" containerName="collect-profiles" Nov 23 15:15:05 crc kubenswrapper[4718]: I1123 15:15:05.169516 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4785d6c3-899e-4bfd-9333-b4493a1cae09" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 23 15:15:05 crc kubenswrapper[4718]: I1123 15:15:05.170289 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-lsf9c" Nov 23 15:15:05 crc kubenswrapper[4718]: I1123 15:15:05.179280 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-lsf9c"] Nov 23 15:15:05 crc kubenswrapper[4718]: I1123 15:15:05.180275 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m6ng8" Nov 23 15:15:05 crc kubenswrapper[4718]: I1123 15:15:05.180293 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 15:15:05 crc kubenswrapper[4718]: I1123 15:15:05.180650 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 15:15:05 crc kubenswrapper[4718]: I1123 15:15:05.180429 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 15:15:05 crc kubenswrapper[4718]: I1123 15:15:05.258525 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-lsf9c\" (UID: \"e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485\") " pod="openstack/ssh-known-hosts-edpm-deployment-lsf9c" Nov 23 15:15:05 crc kubenswrapper[4718]: I1123 15:15:05.258819 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kvj9\" (UniqueName: \"kubernetes.io/projected/e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485-kube-api-access-7kvj9\") pod \"ssh-known-hosts-edpm-deployment-lsf9c\" (UID: \"e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485\") " pod="openstack/ssh-known-hosts-edpm-deployment-lsf9c" Nov 23 
15:15:05 crc kubenswrapper[4718]: I1123 15:15:05.258913 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-lsf9c\" (UID: \"e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485\") " pod="openstack/ssh-known-hosts-edpm-deployment-lsf9c" Nov 23 15:15:05 crc kubenswrapper[4718]: I1123 15:15:05.360107 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-lsf9c\" (UID: \"e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485\") " pod="openstack/ssh-known-hosts-edpm-deployment-lsf9c" Nov 23 15:15:05 crc kubenswrapper[4718]: I1123 15:15:05.360219 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-lsf9c\" (UID: \"e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485\") " pod="openstack/ssh-known-hosts-edpm-deployment-lsf9c" Nov 23 15:15:05 crc kubenswrapper[4718]: I1123 15:15:05.360259 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kvj9\" (UniqueName: \"kubernetes.io/projected/e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485-kube-api-access-7kvj9\") pod \"ssh-known-hosts-edpm-deployment-lsf9c\" (UID: \"e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485\") " pod="openstack/ssh-known-hosts-edpm-deployment-lsf9c" Nov 23 15:15:05 crc kubenswrapper[4718]: I1123 15:15:05.365215 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-lsf9c\" (UID: \"e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485\") " pod="openstack/ssh-known-hosts-edpm-deployment-lsf9c" Nov 23 15:15:05 crc kubenswrapper[4718]: I1123 15:15:05.375181 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-lsf9c\" (UID: \"e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485\") " pod="openstack/ssh-known-hosts-edpm-deployment-lsf9c" Nov 23 15:15:05 crc kubenswrapper[4718]: I1123 15:15:05.385632 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kvj9\" (UniqueName: \"kubernetes.io/projected/e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485-kube-api-access-7kvj9\") pod \"ssh-known-hosts-edpm-deployment-lsf9c\" (UID: \"e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485\") " pod="openstack/ssh-known-hosts-edpm-deployment-lsf9c" Nov 23 15:15:05 crc kubenswrapper[4718]: I1123 15:15:05.497480 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-lsf9c" Nov 23 15:15:06 crc kubenswrapper[4718]: I1123 15:15:06.044799 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-lsf9c"] Nov 23 15:15:06 crc kubenswrapper[4718]: W1123 15:15:06.051946 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode84dd6f8_ea71_48e6_ae5f_d8f6a2af9485.slice/crio-f2c5324124071fb2ced7d09e1beb723328bf9f1af0ac5e10535670dfe5f8d633 WatchSource:0}: Error finding container f2c5324124071fb2ced7d09e1beb723328bf9f1af0ac5e10535670dfe5f8d633: Status 404 returned error can't find the container with id f2c5324124071fb2ced7d09e1beb723328bf9f1af0ac5e10535670dfe5f8d633 Nov 23 15:15:06 crc kubenswrapper[4718]: I1123 15:15:06.054949 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 15:15:06 crc kubenswrapper[4718]: I1123 15:15:06.072002 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-lsf9c" event={"ID":"e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485","Type":"ContainerStarted","Data":"f2c5324124071fb2ced7d09e1beb723328bf9f1af0ac5e10535670dfe5f8d633"} Nov 23 15:15:07 crc kubenswrapper[4718]: I1123 15:15:07.081062 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-lsf9c" event={"ID":"e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485","Type":"ContainerStarted","Data":"5604d32c2f18ddf20d0b1647f906ff3fdea8ad0a88f190f32323afc7251b89a0"} Nov 23 15:15:07 crc kubenswrapper[4718]: I1123 15:15:07.099697 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-lsf9c" podStartSLOduration=1.5879879799999999 podStartE2EDuration="2.099673945s" podCreationTimestamp="2025-11-23 15:15:05 +0000 UTC" firstStartedPulling="2025-11-23 15:15:06.054685637 +0000 UTC m=+1757.294305491" lastFinishedPulling="2025-11-23 15:15:06.566371592 +0000 UTC m=+1757.805991456" observedRunningTime="2025-11-23 15:15:07.098553705 +0000 UTC m=+1758.338173569" watchObservedRunningTime="2025-11-23 15:15:07.099673945 +0000 UTC m=+1758.339293799" Nov 23 15:15:08 crc kubenswrapper[4718]: I1123 15:15:08.046766 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-mlqkj"] Nov 23 15:15:08 crc kubenswrapper[4718]: I1123 15:15:08.059182 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-mlqkj"] Nov 23 15:15:08 crc kubenswrapper[4718]: I1123 15:15:08.454261 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68623fd2-db0d-4196-8cfa-59411b9d7f68" path="/var/lib/kubelet/pods/68623fd2-db0d-4196-8cfa-59411b9d7f68/volumes" Nov 23 15:15:10 crc kubenswrapper[4718]: I1123 15:15:10.440638 4718 scope.go:117] "RemoveContainer" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c" Nov 23 15:15:10 crc kubenswrapper[4718]: E1123 15:15:10.440968 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:15:14 crc kubenswrapper[4718]: I1123 
15:15:14.164670 4718 generic.go:334] "Generic (PLEG): container finished" podID="e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485" containerID="5604d32c2f18ddf20d0b1647f906ff3fdea8ad0a88f190f32323afc7251b89a0" exitCode=0 Nov 23 15:15:14 crc kubenswrapper[4718]: I1123 15:15:14.164774 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-lsf9c" event={"ID":"e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485","Type":"ContainerDied","Data":"5604d32c2f18ddf20d0b1647f906ff3fdea8ad0a88f190f32323afc7251b89a0"} Nov 23 15:15:15 crc kubenswrapper[4718]: I1123 15:15:15.620111 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-lsf9c" Nov 23 15:15:15 crc kubenswrapper[4718]: I1123 15:15:15.758575 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485-ssh-key-openstack-edpm-ipam\") pod \"e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485\" (UID: \"e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485\") " Nov 23 15:15:15 crc kubenswrapper[4718]: I1123 15:15:15.759624 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485-inventory-0\") pod \"e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485\" (UID: \"e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485\") " Nov 23 15:15:15 crc kubenswrapper[4718]: I1123 15:15:15.759683 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kvj9\" (UniqueName: \"kubernetes.io/projected/e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485-kube-api-access-7kvj9\") pod \"e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485\" (UID: \"e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485\") " Nov 23 15:15:15 crc kubenswrapper[4718]: I1123 15:15:15.777611 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485-kube-api-access-7kvj9" (OuterVolumeSpecName: "kube-api-access-7kvj9") pod "e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485" (UID: "e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485"). InnerVolumeSpecName "kube-api-access-7kvj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:15:15 crc kubenswrapper[4718]: I1123 15:15:15.794466 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485" (UID: "e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:15:15 crc kubenswrapper[4718]: I1123 15:15:15.805478 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485" (UID: "e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:15:15 crc kubenswrapper[4718]: I1123 15:15:15.861096 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 23 15:15:15 crc kubenswrapper[4718]: I1123 15:15:15.861131 4718 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 23 15:15:15 crc kubenswrapper[4718]: I1123 15:15:15.861141 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kvj9\" (UniqueName: \"kubernetes.io/projected/e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485-kube-api-access-7kvj9\") on node \"crc\" DevicePath \"\"" Nov 23 15:15:16 crc kubenswrapper[4718]: I1123 15:15:16.187497 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-lsf9c" event={"ID":"e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485","Type":"ContainerDied","Data":"f2c5324124071fb2ced7d09e1beb723328bf9f1af0ac5e10535670dfe5f8d633"} Nov 23 15:15:16 crc kubenswrapper[4718]: I1123 15:15:16.187542 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2c5324124071fb2ced7d09e1beb723328bf9f1af0ac5e10535670dfe5f8d633" Nov 23 15:15:16 crc kubenswrapper[4718]: I1123 15:15:16.187571 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-lsf9c" Nov 23 15:15:16 crc kubenswrapper[4718]: I1123 15:15:16.261406 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4ngvk"] Nov 23 15:15:16 crc kubenswrapper[4718]: E1123 15:15:16.261843 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485" containerName="ssh-known-hosts-edpm-deployment" Nov 23 15:15:16 crc kubenswrapper[4718]: I1123 15:15:16.261856 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485" containerName="ssh-known-hosts-edpm-deployment" Nov 23 15:15:16 crc kubenswrapper[4718]: I1123 15:15:16.262026 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485" containerName="ssh-known-hosts-edpm-deployment" Nov 23 15:15:16 crc kubenswrapper[4718]: I1123 15:15:16.262605 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4ngvk" Nov 23 15:15:16 crc kubenswrapper[4718]: I1123 15:15:16.265120 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 15:15:16 crc kubenswrapper[4718]: I1123 15:15:16.265170 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 15:15:16 crc kubenswrapper[4718]: I1123 15:15:16.265263 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m6ng8" Nov 23 15:15:16 crc kubenswrapper[4718]: I1123 15:15:16.265490 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 15:15:16 crc kubenswrapper[4718]: I1123 15:15:16.270001 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hr24\" (UniqueName: \"kubernetes.io/projected/a4cabb6a-7840-4f18-af7f-03bfffcb4c11-kube-api-access-7hr24\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4ngvk\" (UID: \"a4cabb6a-7840-4f18-af7f-03bfffcb4c11\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4ngvk" Nov 23 15:15:16 crc kubenswrapper[4718]: I1123 15:15:16.270049 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4cabb6a-7840-4f18-af7f-03bfffcb4c11-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4ngvk\" (UID: \"a4cabb6a-7840-4f18-af7f-03bfffcb4c11\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4ngvk" Nov 23 15:15:16 crc kubenswrapper[4718]: I1123 15:15:16.270180 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4cabb6a-7840-4f18-af7f-03bfffcb4c11-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4ngvk\" (UID: \"a4cabb6a-7840-4f18-af7f-03bfffcb4c11\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4ngvk" Nov 23 15:15:16 crc kubenswrapper[4718]: I1123 15:15:16.276752 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4ngvk"] Nov 23 15:15:16 crc kubenswrapper[4718]: I1123 15:15:16.371685 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hr24\" (UniqueName: \"kubernetes.io/projected/a4cabb6a-7840-4f18-af7f-03bfffcb4c11-kube-api-access-7hr24\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4ngvk\" (UID: \"a4cabb6a-7840-4f18-af7f-03bfffcb4c11\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4ngvk" Nov 23 15:15:16 crc kubenswrapper[4718]: I1123 15:15:16.371734 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4cabb6a-7840-4f18-af7f-03bfffcb4c11-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4ngvk\" (UID: \"a4cabb6a-7840-4f18-af7f-03bfffcb4c11\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4ngvk" Nov 23 15:15:16 crc kubenswrapper[4718]: I1123 15:15:16.371784 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4cabb6a-7840-4f18-af7f-03bfffcb4c11-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4ngvk\" (UID: \"a4cabb6a-7840-4f18-af7f-03bfffcb4c11\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4ngvk" Nov 23 15:15:16 crc kubenswrapper[4718]: I1123 15:15:16.375272 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4cabb6a-7840-4f18-af7f-03bfffcb4c11-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4ngvk\" (UID: \"a4cabb6a-7840-4f18-af7f-03bfffcb4c11\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4ngvk" Nov 23 15:15:16 crc kubenswrapper[4718]: I1123 15:15:16.375715 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4cabb6a-7840-4f18-af7f-03bfffcb4c11-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4ngvk\" (UID: \"a4cabb6a-7840-4f18-af7f-03bfffcb4c11\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4ngvk" Nov 23 15:15:16 crc kubenswrapper[4718]: I1123 15:15:16.386981 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hr24\" (UniqueName: \"kubernetes.io/projected/a4cabb6a-7840-4f18-af7f-03bfffcb4c11-kube-api-access-7hr24\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4ngvk\" (UID: \"a4cabb6a-7840-4f18-af7f-03bfffcb4c11\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4ngvk" Nov 23 15:15:16 crc kubenswrapper[4718]: I1123 15:15:16.624939 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4ngvk" Nov 23 15:15:17 crc kubenswrapper[4718]: I1123 15:15:17.208354 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4ngvk"] Nov 23 15:15:18 crc kubenswrapper[4718]: I1123 15:15:18.207536 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4ngvk" event={"ID":"a4cabb6a-7840-4f18-af7f-03bfffcb4c11","Type":"ContainerStarted","Data":"89ef5fcd2ea7af075291f8607f34c35bebc8b9261275859b7d8d6317d2373ba4"} Nov 23 15:15:19 crc kubenswrapper[4718]: I1123 15:15:19.220274 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4ngvk" event={"ID":"a4cabb6a-7840-4f18-af7f-03bfffcb4c11","Type":"ContainerStarted","Data":"b4fe2f21fefedf3974938a97d0f9c6bd8c4dd9e3856890addba267948decf5d3"} Nov 23 15:15:19 crc kubenswrapper[4718]: I1123 15:15:19.241926 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4ngvk" podStartSLOduration=2.610750413 podStartE2EDuration="3.241904937s" podCreationTimestamp="2025-11-23 15:15:16 +0000 UTC" firstStartedPulling="2025-11-23 15:15:17.213263508 +0000 UTC m=+1768.452883372" lastFinishedPulling="2025-11-23 15:15:17.844418012 +0000 UTC m=+1769.084037896" observedRunningTime="2025-11-23 15:15:19.23714236 +0000 UTC m=+1770.476762244" watchObservedRunningTime="2025-11-23 15:15:19.241904937 +0000 UTC m=+1770.481524781" Nov 23 15:15:23 crc kubenswrapper[4718]: I1123 15:15:23.441529 4718 scope.go:117] "RemoveContainer" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c" Nov 23 15:15:23 crc kubenswrapper[4718]: E1123 15:15:23.442122 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:15:26 crc kubenswrapper[4718]: I1123 15:15:26.288965 4718 generic.go:334] "Generic (PLEG): container finished" podID="a4cabb6a-7840-4f18-af7f-03bfffcb4c11" containerID="b4fe2f21fefedf3974938a97d0f9c6bd8c4dd9e3856890addba267948decf5d3" exitCode=0 Nov 23 15:15:26 crc kubenswrapper[4718]: I1123 15:15:26.289044 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4ngvk" event={"ID":"a4cabb6a-7840-4f18-af7f-03bfffcb4c11","Type":"ContainerDied","Data":"b4fe2f21fefedf3974938a97d0f9c6bd8c4dd9e3856890addba267948decf5d3"} Nov 23 15:15:27 crc kubenswrapper[4718]: I1123 15:15:27.692775 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4ngvk" Nov 23 15:15:27 crc kubenswrapper[4718]: I1123 15:15:27.834046 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hr24\" (UniqueName: \"kubernetes.io/projected/a4cabb6a-7840-4f18-af7f-03bfffcb4c11-kube-api-access-7hr24\") pod \"a4cabb6a-7840-4f18-af7f-03bfffcb4c11\" (UID: \"a4cabb6a-7840-4f18-af7f-03bfffcb4c11\") " Nov 23 15:15:27 crc kubenswrapper[4718]: I1123 15:15:27.834135 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4cabb6a-7840-4f18-af7f-03bfffcb4c11-inventory\") pod \"a4cabb6a-7840-4f18-af7f-03bfffcb4c11\" (UID: \"a4cabb6a-7840-4f18-af7f-03bfffcb4c11\") " Nov 23 15:15:27 crc kubenswrapper[4718]: I1123 15:15:27.834635 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4cabb6a-7840-4f18-af7f-03bfffcb4c11-ssh-key\") pod \"a4cabb6a-7840-4f18-af7f-03bfffcb4c11\" (UID: \"a4cabb6a-7840-4f18-af7f-03bfffcb4c11\") " Nov 23 15:15:27 crc kubenswrapper[4718]: I1123 15:15:27.848047 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4cabb6a-7840-4f18-af7f-03bfffcb4c11-kube-api-access-7hr24" (OuterVolumeSpecName: "kube-api-access-7hr24") pod "a4cabb6a-7840-4f18-af7f-03bfffcb4c11" (UID: "a4cabb6a-7840-4f18-af7f-03bfffcb4c11"). InnerVolumeSpecName "kube-api-access-7hr24". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:15:27 crc kubenswrapper[4718]: I1123 15:15:27.864498 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4cabb6a-7840-4f18-af7f-03bfffcb4c11-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a4cabb6a-7840-4f18-af7f-03bfffcb4c11" (UID: "a4cabb6a-7840-4f18-af7f-03bfffcb4c11"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:15:27 crc kubenswrapper[4718]: I1123 15:15:27.866208 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4cabb6a-7840-4f18-af7f-03bfffcb4c11-inventory" (OuterVolumeSpecName: "inventory") pod "a4cabb6a-7840-4f18-af7f-03bfffcb4c11" (UID: "a4cabb6a-7840-4f18-af7f-03bfffcb4c11"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:15:27 crc kubenswrapper[4718]: I1123 15:15:27.937604 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4cabb6a-7840-4f18-af7f-03bfffcb4c11-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 15:15:27 crc kubenswrapper[4718]: I1123 15:15:27.937642 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hr24\" (UniqueName: \"kubernetes.io/projected/a4cabb6a-7840-4f18-af7f-03bfffcb4c11-kube-api-access-7hr24\") on node \"crc\" DevicePath \"\"" Nov 23 15:15:27 crc kubenswrapper[4718]: I1123 15:15:27.937653 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4cabb6a-7840-4f18-af7f-03bfffcb4c11-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 15:15:28 crc kubenswrapper[4718]: I1123 15:15:28.306979 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4ngvk" event={"ID":"a4cabb6a-7840-4f18-af7f-03bfffcb4c11","Type":"ContainerDied","Data":"89ef5fcd2ea7af075291f8607f34c35bebc8b9261275859b7d8d6317d2373ba4"} Nov 23 15:15:28 crc kubenswrapper[4718]: I1123 15:15:28.307021 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89ef5fcd2ea7af075291f8607f34c35bebc8b9261275859b7d8d6317d2373ba4" Nov 23 15:15:28 crc kubenswrapper[4718]: I1123 15:15:28.307341 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4ngvk" Nov 23 15:15:28 crc kubenswrapper[4718]: I1123 15:15:28.380024 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr"] Nov 23 15:15:28 crc kubenswrapper[4718]: E1123 15:15:28.380668 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4cabb6a-7840-4f18-af7f-03bfffcb4c11" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 23 15:15:28 crc kubenswrapper[4718]: I1123 15:15:28.380690 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4cabb6a-7840-4f18-af7f-03bfffcb4c11" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 23 15:15:28 crc kubenswrapper[4718]: I1123 15:15:28.380968 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4cabb6a-7840-4f18-af7f-03bfffcb4c11" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 23 15:15:28 crc kubenswrapper[4718]: I1123 15:15:28.381798 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr" Nov 23 15:15:28 crc kubenswrapper[4718]: I1123 15:15:28.385249 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 15:15:28 crc kubenswrapper[4718]: I1123 15:15:28.385249 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 15:15:28 crc kubenswrapper[4718]: I1123 15:15:28.385578 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 15:15:28 crc kubenswrapper[4718]: I1123 15:15:28.385764 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m6ng8" Nov 23 15:15:28 crc kubenswrapper[4718]: I1123 15:15:28.388640 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr"] Nov 23 15:15:28 crc kubenswrapper[4718]: I1123 15:15:28.446123 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7s9z\" (UniqueName: \"kubernetes.io/projected/ce6b5147-989d-4b02-987b-1b6b3c4d9460-kube-api-access-s7s9z\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr\" (UID: \"ce6b5147-989d-4b02-987b-1b6b3c4d9460\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr" Nov 23 15:15:28 crc kubenswrapper[4718]: I1123 15:15:28.446495 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce6b5147-989d-4b02-987b-1b6b3c4d9460-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr\" (UID: \"ce6b5147-989d-4b02-987b-1b6b3c4d9460\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr" Nov 23 15:15:28 crc kubenswrapper[4718]: I1123 15:15:28.446545 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce6b5147-989d-4b02-987b-1b6b3c4d9460-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr\" (UID: \"ce6b5147-989d-4b02-987b-1b6b3c4d9460\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr" Nov 23 15:15:28 crc kubenswrapper[4718]: I1123 15:15:28.548782 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7s9z\" (UniqueName: \"kubernetes.io/projected/ce6b5147-989d-4b02-987b-1b6b3c4d9460-kube-api-access-s7s9z\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr\" (UID: \"ce6b5147-989d-4b02-987b-1b6b3c4d9460\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr" Nov 23 15:15:28 crc kubenswrapper[4718]: I1123 15:15:28.548889 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce6b5147-989d-4b02-987b-1b6b3c4d9460-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr\" (UID: \"ce6b5147-989d-4b02-987b-1b6b3c4d9460\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr" Nov 23 15:15:28 crc kubenswrapper[4718]: I1123 15:15:28.548957 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce6b5147-989d-4b02-987b-1b6b3c4d9460-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr\" (UID: 
\"ce6b5147-989d-4b02-987b-1b6b3c4d9460\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr" Nov 23 15:15:28 crc kubenswrapper[4718]: I1123 15:15:28.554456 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce6b5147-989d-4b02-987b-1b6b3c4d9460-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr\" (UID: \"ce6b5147-989d-4b02-987b-1b6b3c4d9460\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr" Nov 23 15:15:28 crc kubenswrapper[4718]: I1123 15:15:28.556429 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce6b5147-989d-4b02-987b-1b6b3c4d9460-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr\" (UID: \"ce6b5147-989d-4b02-987b-1b6b3c4d9460\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr" Nov 23 15:15:28 crc kubenswrapper[4718]: I1123 15:15:28.567169 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7s9z\" (UniqueName: \"kubernetes.io/projected/ce6b5147-989d-4b02-987b-1b6b3c4d9460-kube-api-access-s7s9z\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr\" (UID: \"ce6b5147-989d-4b02-987b-1b6b3c4d9460\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr" Nov 23 15:15:28 crc kubenswrapper[4718]: I1123 15:15:28.698772 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr" Nov 23 15:15:29 crc kubenswrapper[4718]: I1123 15:15:29.266035 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr"] Nov 23 15:15:29 crc kubenswrapper[4718]: I1123 15:15:29.320870 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr" event={"ID":"ce6b5147-989d-4b02-987b-1b6b3c4d9460","Type":"ContainerStarted","Data":"59b51b829a3a943dcfd07309f424761119b9b5e7f78d48ea462ec90e009bda54"} Nov 23 15:15:30 crc kubenswrapper[4718]: I1123 15:15:30.329679 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr" event={"ID":"ce6b5147-989d-4b02-987b-1b6b3c4d9460","Type":"ContainerStarted","Data":"2de77b9fb08fc456b37e91ffdcee50b5976cc2fbdfa843db6e00301838447c51"} Nov 23 15:15:30 crc kubenswrapper[4718]: I1123 15:15:30.354017 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr" podStartSLOduration=1.9138667329999999 podStartE2EDuration="2.353992159s" podCreationTimestamp="2025-11-23 15:15:28 +0000 UTC" firstStartedPulling="2025-11-23 15:15:29.268304435 +0000 UTC m=+1780.507924279" lastFinishedPulling="2025-11-23 15:15:29.708429861 +0000 UTC m=+1780.948049705" observedRunningTime="2025-11-23 15:15:30.350037204 +0000 UTC m=+1781.589657048" watchObservedRunningTime="2025-11-23 15:15:30.353992159 +0000 UTC m=+1781.593612003" Nov 23 15:15:38 crc kubenswrapper[4718]: I1123 15:15:38.440732 4718 scope.go:117] "RemoveContainer" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c" Nov 23 15:15:38 crc kubenswrapper[4718]: E1123 15:15:38.441523 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:15:39 crc kubenswrapper[4718]: I1123 15:15:39.415585 4718 generic.go:334] "Generic (PLEG): container finished" podID="ce6b5147-989d-4b02-987b-1b6b3c4d9460" containerID="2de77b9fb08fc456b37e91ffdcee50b5976cc2fbdfa843db6e00301838447c51" exitCode=0 Nov 23 15:15:39 crc kubenswrapper[4718]: I1123 15:15:39.415639 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr" event={"ID":"ce6b5147-989d-4b02-987b-1b6b3c4d9460","Type":"ContainerDied","Data":"2de77b9fb08fc456b37e91ffdcee50b5976cc2fbdfa843db6e00301838447c51"} Nov 23 15:15:40 crc kubenswrapper[4718]: I1123 15:15:40.857972 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr" Nov 23 15:15:40 crc kubenswrapper[4718]: I1123 15:15:40.982832 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce6b5147-989d-4b02-987b-1b6b3c4d9460-inventory\") pod \"ce6b5147-989d-4b02-987b-1b6b3c4d9460\" (UID: \"ce6b5147-989d-4b02-987b-1b6b3c4d9460\") " Nov 23 15:15:40 crc kubenswrapper[4718]: I1123 15:15:40.983001 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7s9z\" (UniqueName: \"kubernetes.io/projected/ce6b5147-989d-4b02-987b-1b6b3c4d9460-kube-api-access-s7s9z\") pod \"ce6b5147-989d-4b02-987b-1b6b3c4d9460\" (UID: \"ce6b5147-989d-4b02-987b-1b6b3c4d9460\") " Nov 23 15:15:40 crc kubenswrapper[4718]: I1123 15:15:40.983036 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce6b5147-989d-4b02-987b-1b6b3c4d9460-ssh-key\") pod \"ce6b5147-989d-4b02-987b-1b6b3c4d9460\" (UID: \"ce6b5147-989d-4b02-987b-1b6b3c4d9460\") " Nov 23 15:15:40 crc kubenswrapper[4718]: I1123 15:15:40.987638 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce6b5147-989d-4b02-987b-1b6b3c4d9460-kube-api-access-s7s9z" (OuterVolumeSpecName: "kube-api-access-s7s9z") pod "ce6b5147-989d-4b02-987b-1b6b3c4d9460" (UID: "ce6b5147-989d-4b02-987b-1b6b3c4d9460"). InnerVolumeSpecName "kube-api-access-s7s9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.011156 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce6b5147-989d-4b02-987b-1b6b3c4d9460-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ce6b5147-989d-4b02-987b-1b6b3c4d9460" (UID: "ce6b5147-989d-4b02-987b-1b6b3c4d9460"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.015237 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce6b5147-989d-4b02-987b-1b6b3c4d9460-inventory" (OuterVolumeSpecName: "inventory") pod "ce6b5147-989d-4b02-987b-1b6b3c4d9460" (UID: "ce6b5147-989d-4b02-987b-1b6b3c4d9460"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.084916 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce6b5147-989d-4b02-987b-1b6b3c4d9460-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.084956 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7s9z\" (UniqueName: \"kubernetes.io/projected/ce6b5147-989d-4b02-987b-1b6b3c4d9460-kube-api-access-s7s9z\") on node \"crc\" DevicePath \"\"" Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.084968 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce6b5147-989d-4b02-987b-1b6b3c4d9460-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.438512 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr" event={"ID":"ce6b5147-989d-4b02-987b-1b6b3c4d9460","Type":"ContainerDied","Data":"59b51b829a3a943dcfd07309f424761119b9b5e7f78d48ea462ec90e009bda54"} Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.438548 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59b51b829a3a943dcfd07309f424761119b9b5e7f78d48ea462ec90e009bda54" Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.438613 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr" Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.534101 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"] Nov 23 15:15:41 crc kubenswrapper[4718]: E1123 15:15:41.534718 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce6b5147-989d-4b02-987b-1b6b3c4d9460" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.534734 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce6b5147-989d-4b02-987b-1b6b3c4d9460" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.534943 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce6b5147-989d-4b02-987b-1b6b3c4d9460" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.535649 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.537643 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.537643 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.537731 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.537771 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.537922 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.537961 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.537976 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.541529 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m6ng8"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.547550 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"]
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.594828 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.594880 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.594905 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.594948 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.594977 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.595060 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.595099 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.595137 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.595192 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.595263 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.595329 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.595410 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.595597 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.595640 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz8r6\" (UniqueName: \"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-kube-api-access-sz8r6\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.697167 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.697247 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.697277 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz8r6\" (UniqueName: \"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-kube-api-access-sz8r6\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.697312 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.697338 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.697355 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.697389 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.697410 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.697468 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.697485 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.697511 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.697528 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.697554 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.697595 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.701382 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.704112 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.704315 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.704324 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.706322 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.706372 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.706376 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.706653 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.706755 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.706776 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.706792 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.708018 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.711340 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.758642 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz8r6\" (UniqueName: \"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-kube-api-access-sz8r6\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:41 crc kubenswrapper[4718]: I1123 15:15:41.853262 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"
Nov 23 15:15:42 crc kubenswrapper[4718]: I1123 15:15:42.455541 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx"]
Nov 23 15:15:43 crc kubenswrapper[4718]: I1123 15:15:43.460863 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx" event={"ID":"e04ec17a-ea46-47b8-ac60-1e2a22849b63","Type":"ContainerStarted","Data":"a4205ad92003f3d8335a261df74846e5cb8457765751451ceaf07d506a9bfb97"}
Nov 23 15:15:43 crc kubenswrapper[4718]: I1123 15:15:43.461377 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx" event={"ID":"e04ec17a-ea46-47b8-ac60-1e2a22849b63","Type":"ContainerStarted","Data":"8757a9a5d3c410990ccf56b360a79ba31979071124348e860a2f8a7651e0017b"}
Nov 23 15:15:43 crc kubenswrapper[4718]: I1123 15:15:43.486066 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx" podStartSLOduration=2.06117486 podStartE2EDuration="2.486042089s" podCreationTimestamp="2025-11-23 15:15:41 +0000 UTC" firstStartedPulling="2025-11-23 15:15:42.471439652 +0000 UTC m=+1793.711059496" lastFinishedPulling="2025-11-23 15:15:42.896306881 +0000 UTC m=+1794.135926725" observedRunningTime="2025-11-23 15:15:43.480200244 +0000 UTC m=+1794.719820088" watchObservedRunningTime="2025-11-23 15:15:43.486042089 +0000 UTC m=+1794.725661943"
event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerStarted","Data":"71d5542cf179ce80dcc7852dfd81dd988b779960126c8753eb23712fe413bccd"} Nov 23 15:16:21 crc kubenswrapper[4718]: I1123 15:16:21.811293 4718 generic.go:334] "Generic (PLEG): container finished" podID="e04ec17a-ea46-47b8-ac60-1e2a22849b63" containerID="a4205ad92003f3d8335a261df74846e5cb8457765751451ceaf07d506a9bfb97" exitCode=0 Nov 23 15:16:21 crc kubenswrapper[4718]: I1123 15:16:21.811391 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx" event={"ID":"e04ec17a-ea46-47b8-ac60-1e2a22849b63","Type":"ContainerDied","Data":"a4205ad92003f3d8335a261df74846e5cb8457765751451ceaf07d506a9bfb97"} Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.207292 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.316015 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-nova-combined-ca-bundle\") pod \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.316271 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.316372 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-ovn-combined-ca-bundle\") pod \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.316411 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-repo-setup-combined-ca-bundle\") pod \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.316465 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-bootstrap-combined-ca-bundle\") pod \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.316496 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz8r6\" (UniqueName: \"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-kube-api-access-sz8r6\") pod \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.316530 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-openstack-edpm-ipam-ovn-default-certs-0\") pod \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.316576 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-neutron-metadata-combined-ca-bundle\") pod \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.316597 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-telemetry-combined-ca-bundle\") pod \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.316613 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.316641 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-inventory\") pod \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.316664 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.316688 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-libvirt-combined-ca-bundle\") pod \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.316734 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-ssh-key\") pod \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\" (UID: \"e04ec17a-ea46-47b8-ac60-1e2a22849b63\") " Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.324636 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "e04ec17a-ea46-47b8-ac60-1e2a22849b63" (UID: "e04ec17a-ea46-47b8-ac60-1e2a22849b63"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.324916 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "e04ec17a-ea46-47b8-ac60-1e2a22849b63" (UID: "e04ec17a-ea46-47b8-ac60-1e2a22849b63"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.325236 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "e04ec17a-ea46-47b8-ac60-1e2a22849b63" (UID: "e04ec17a-ea46-47b8-ac60-1e2a22849b63"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.325758 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e04ec17a-ea46-47b8-ac60-1e2a22849b63" (UID: "e04ec17a-ea46-47b8-ac60-1e2a22849b63"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.333593 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "e04ec17a-ea46-47b8-ac60-1e2a22849b63" (UID: "e04ec17a-ea46-47b8-ac60-1e2a22849b63"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.333617 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-kube-api-access-sz8r6" (OuterVolumeSpecName: "kube-api-access-sz8r6") pod "e04ec17a-ea46-47b8-ac60-1e2a22849b63" (UID: "e04ec17a-ea46-47b8-ac60-1e2a22849b63"). InnerVolumeSpecName "kube-api-access-sz8r6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.333682 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "e04ec17a-ea46-47b8-ac60-1e2a22849b63" (UID: "e04ec17a-ea46-47b8-ac60-1e2a22849b63"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.337647 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e04ec17a-ea46-47b8-ac60-1e2a22849b63" (UID: "e04ec17a-ea46-47b8-ac60-1e2a22849b63"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.337661 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e04ec17a-ea46-47b8-ac60-1e2a22849b63" (UID: "e04ec17a-ea46-47b8-ac60-1e2a22849b63"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.337916 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e04ec17a-ea46-47b8-ac60-1e2a22849b63" (UID: "e04ec17a-ea46-47b8-ac60-1e2a22849b63"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.337946 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e04ec17a-ea46-47b8-ac60-1e2a22849b63" (UID: "e04ec17a-ea46-47b8-ac60-1e2a22849b63"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.345221 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "e04ec17a-ea46-47b8-ac60-1e2a22849b63" (UID: "e04ec17a-ea46-47b8-ac60-1e2a22849b63"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.354234 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-inventory" (OuterVolumeSpecName: "inventory") pod "e04ec17a-ea46-47b8-ac60-1e2a22849b63" (UID: "e04ec17a-ea46-47b8-ac60-1e2a22849b63"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.355987 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e04ec17a-ea46-47b8-ac60-1e2a22849b63" (UID: "e04ec17a-ea46-47b8-ac60-1e2a22849b63"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.417923 4718 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.417958 4718 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.417970 4718 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.417981 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz8r6\" (UniqueName: \"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-kube-api-access-sz8r6\") on node \"crc\" DevicePath \"\"" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.417994 4718 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.418005 4718 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.418019 4718 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.418032 4718 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.418045 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.418055 4718 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.418067 4718 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.418077 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-ssh-key\") on node 
\"crc\" DevicePath \"\"" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.418086 4718 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ec17a-ea46-47b8-ac60-1e2a22849b63-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.418097 4718 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e04ec17a-ea46-47b8-ac60-1e2a22849b63-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.836388 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx" event={"ID":"e04ec17a-ea46-47b8-ac60-1e2a22849b63","Type":"ContainerDied","Data":"8757a9a5d3c410990ccf56b360a79ba31979071124348e860a2f8a7651e0017b"} Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.836654 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8757a9a5d3c410990ccf56b360a79ba31979071124348e860a2f8a7651e0017b" Nov 23 15:16:23 crc kubenswrapper[4718]: I1123 15:16:23.836433 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.320468 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx"] Nov 23 15:16:24 crc kubenswrapper[4718]: E1123 15:16:24.320914 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e04ec17a-ea46-47b8-ac60-1e2a22849b63" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.320935 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e04ec17a-ea46-47b8-ac60-1e2a22849b63" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.321197 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="e04ec17a-ea46-47b8-ac60-1e2a22849b63" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.321940 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.324385 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.324741 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.324909 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.325140 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m6ng8" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.325793 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.334609 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx"] Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.434597 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cea34a47-7094-4217-8086-0c4d9ec9d23f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b7ghx\" (UID: \"cea34a47-7094-4217-8086-0c4d9ec9d23f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.434684 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cea34a47-7094-4217-8086-0c4d9ec9d23f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b7ghx\" (UID: \"cea34a47-7094-4217-8086-0c4d9ec9d23f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.434713 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gnm9\" (UniqueName: \"kubernetes.io/projected/cea34a47-7094-4217-8086-0c4d9ec9d23f-kube-api-access-9gnm9\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b7ghx\" (UID: \"cea34a47-7094-4217-8086-0c4d9ec9d23f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.434747 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cea34a47-7094-4217-8086-0c4d9ec9d23f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b7ghx\" (UID: \"cea34a47-7094-4217-8086-0c4d9ec9d23f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.434809 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cea34a47-7094-4217-8086-0c4d9ec9d23f-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b7ghx\" (UID: \"cea34a47-7094-4217-8086-0c4d9ec9d23f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.537281 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/cea34a47-7094-4217-8086-0c4d9ec9d23f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b7ghx\" (UID: \"cea34a47-7094-4217-8086-0c4d9ec9d23f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.537406 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cea34a47-7094-4217-8086-0c4d9ec9d23f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b7ghx\" (UID: \"cea34a47-7094-4217-8086-0c4d9ec9d23f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.537432 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gnm9\" (UniqueName: \"kubernetes.io/projected/cea34a47-7094-4217-8086-0c4d9ec9d23f-kube-api-access-9gnm9\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b7ghx\" (UID: \"cea34a47-7094-4217-8086-0c4d9ec9d23f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.537505 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cea34a47-7094-4217-8086-0c4d9ec9d23f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b7ghx\" (UID: \"cea34a47-7094-4217-8086-0c4d9ec9d23f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.537559 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cea34a47-7094-4217-8086-0c4d9ec9d23f-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b7ghx\" (UID: \"cea34a47-7094-4217-8086-0c4d9ec9d23f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.539222 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cea34a47-7094-4217-8086-0c4d9ec9d23f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b7ghx\" (UID: \"cea34a47-7094-4217-8086-0c4d9ec9d23f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.553986 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cea34a47-7094-4217-8086-0c4d9ec9d23f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b7ghx\" (UID: \"cea34a47-7094-4217-8086-0c4d9ec9d23f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.554182 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cea34a47-7094-4217-8086-0c4d9ec9d23f-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b7ghx\" (UID: \"cea34a47-7094-4217-8086-0c4d9ec9d23f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.554232 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cea34a47-7094-4217-8086-0c4d9ec9d23f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b7ghx\" (UID: \"cea34a47-7094-4217-8086-0c4d9ec9d23f\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.557145 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gnm9\" (UniqueName: \"kubernetes.io/projected/cea34a47-7094-4217-8086-0c4d9ec9d23f-kube-api-access-9gnm9\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b7ghx\" (UID: \"cea34a47-7094-4217-8086-0c4d9ec9d23f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.652628 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx" Nov 23 15:16:24 crc kubenswrapper[4718]: I1123 15:16:24.987098 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx"] Nov 23 15:16:24 crc kubenswrapper[4718]: W1123 15:16:24.991713 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcea34a47_7094_4217_8086_0c4d9ec9d23f.slice/crio-b81582240171377eaa035924e503fcfc100b23e5b3b5219e24bfc21ef8647d3c WatchSource:0}: Error finding container b81582240171377eaa035924e503fcfc100b23e5b3b5219e24bfc21ef8647d3c: Status 404 returned error can't find the container with id b81582240171377eaa035924e503fcfc100b23e5b3b5219e24bfc21ef8647d3c Nov 23 15:16:25 crc kubenswrapper[4718]: I1123 15:16:25.852704 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx" event={"ID":"cea34a47-7094-4217-8086-0c4d9ec9d23f","Type":"ContainerStarted","Data":"b81582240171377eaa035924e503fcfc100b23e5b3b5219e24bfc21ef8647d3c"} Nov 23 15:16:26 crc kubenswrapper[4718]: I1123 15:16:26.868188 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx" event={"ID":"cea34a47-7094-4217-8086-0c4d9ec9d23f","Type":"ContainerStarted","Data":"cffac8c04627caf1fdeb4acf81bf6cedc3b82b2d15046b0b5d7c3c6e899a25a5"} Nov 23 15:16:26 crc kubenswrapper[4718]: I1123 15:16:26.886546 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx" podStartSLOduration=1.886540826 podStartE2EDuration="2.886526115s" podCreationTimestamp="2025-11-23 15:16:24 +0000 UTC" firstStartedPulling="2025-11-23 15:16:24.993826611 +0000 UTC m=+1836.233446455" lastFinishedPulling="2025-11-23 15:16:25.99381186 +0000 UTC m=+1837.233431744" observedRunningTime="2025-11-23 15:16:26.882828546 +0000 UTC m=+1838.122448390" watchObservedRunningTime="2025-11-23 15:16:26.886526115 +0000 UTC m=+1838.126145959" Nov 23 15:17:29 crc kubenswrapper[4718]: I1123 15:17:29.450037 4718 generic.go:334] "Generic (PLEG): container finished" podID="cea34a47-7094-4217-8086-0c4d9ec9d23f" containerID="cffac8c04627caf1fdeb4acf81bf6cedc3b82b2d15046b0b5d7c3c6e899a25a5" exitCode=0 Nov 23 15:17:29 crc kubenswrapper[4718]: I1123 15:17:29.450129 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx" event={"ID":"cea34a47-7094-4217-8086-0c4d9ec9d23f","Type":"ContainerDied","Data":"cffac8c04627caf1fdeb4acf81bf6cedc3b82b2d15046b0b5d7c3c6e899a25a5"} Nov 23 15:17:30 crc kubenswrapper[4718]: I1123 15:17:30.940812 4718 util.go:48] "No ready sandbox for pod can be found. 
Nov 23 15:17:29 crc kubenswrapper[4718]: I1123 15:17:29.450037 4718 generic.go:334] "Generic (PLEG): container finished" podID="cea34a47-7094-4217-8086-0c4d9ec9d23f" containerID="cffac8c04627caf1fdeb4acf81bf6cedc3b82b2d15046b0b5d7c3c6e899a25a5" exitCode=0
Nov 23 15:17:29 crc kubenswrapper[4718]: I1123 15:17:29.450129 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx" event={"ID":"cea34a47-7094-4217-8086-0c4d9ec9d23f","Type":"ContainerDied","Data":"cffac8c04627caf1fdeb4acf81bf6cedc3b82b2d15046b0b5d7c3c6e899a25a5"}
Nov 23 15:17:30 crc kubenswrapper[4718]: I1123 15:17:30.940812 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.070207 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cea34a47-7094-4217-8086-0c4d9ec9d23f-ssh-key\") pod \"cea34a47-7094-4217-8086-0c4d9ec9d23f\" (UID: \"cea34a47-7094-4217-8086-0c4d9ec9d23f\") "
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.070316 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cea34a47-7094-4217-8086-0c4d9ec9d23f-ovncontroller-config-0\") pod \"cea34a47-7094-4217-8086-0c4d9ec9d23f\" (UID: \"cea34a47-7094-4217-8086-0c4d9ec9d23f\") "
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.070337 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gnm9\" (UniqueName: \"kubernetes.io/projected/cea34a47-7094-4217-8086-0c4d9ec9d23f-kube-api-access-9gnm9\") pod \"cea34a47-7094-4217-8086-0c4d9ec9d23f\" (UID: \"cea34a47-7094-4217-8086-0c4d9ec9d23f\") "
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.070358 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cea34a47-7094-4217-8086-0c4d9ec9d23f-inventory\") pod \"cea34a47-7094-4217-8086-0c4d9ec9d23f\" (UID: \"cea34a47-7094-4217-8086-0c4d9ec9d23f\") "
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.070392 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cea34a47-7094-4217-8086-0c4d9ec9d23f-ovn-combined-ca-bundle\") pod \"cea34a47-7094-4217-8086-0c4d9ec9d23f\" (UID: \"cea34a47-7094-4217-8086-0c4d9ec9d23f\") "
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.075651 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cea34a47-7094-4217-8086-0c4d9ec9d23f-kube-api-access-9gnm9" (OuterVolumeSpecName: "kube-api-access-9gnm9") pod "cea34a47-7094-4217-8086-0c4d9ec9d23f" (UID: "cea34a47-7094-4217-8086-0c4d9ec9d23f"). InnerVolumeSpecName "kube-api-access-9gnm9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.077705 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cea34a47-7094-4217-8086-0c4d9ec9d23f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "cea34a47-7094-4217-8086-0c4d9ec9d23f" (UID: "cea34a47-7094-4217-8086-0c4d9ec9d23f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.104316 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cea34a47-7094-4217-8086-0c4d9ec9d23f-inventory" (OuterVolumeSpecName: "inventory") pod "cea34a47-7094-4217-8086-0c4d9ec9d23f" (UID: "cea34a47-7094-4217-8086-0c4d9ec9d23f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.104465 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cea34a47-7094-4217-8086-0c4d9ec9d23f-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "cea34a47-7094-4217-8086-0c4d9ec9d23f" (UID: "cea34a47-7094-4217-8086-0c4d9ec9d23f"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.109053 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cea34a47-7094-4217-8086-0c4d9ec9d23f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cea34a47-7094-4217-8086-0c4d9ec9d23f" (UID: "cea34a47-7094-4217-8086-0c4d9ec9d23f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.173562 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cea34a47-7094-4217-8086-0c4d9ec9d23f-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.173594 4718 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cea34a47-7094-4217-8086-0c4d9ec9d23f-ovncontroller-config-0\") on node \"crc\" DevicePath \"\""
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.173609 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gnm9\" (UniqueName: \"kubernetes.io/projected/cea34a47-7094-4217-8086-0c4d9ec9d23f-kube-api-access-9gnm9\") on node \"crc\" DevicePath \"\""
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.173620 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cea34a47-7094-4217-8086-0c4d9ec9d23f-inventory\") on node \"crc\" DevicePath \"\""
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.173634 4718 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cea34a47-7094-4217-8086-0c4d9ec9d23f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.467117 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx" event={"ID":"cea34a47-7094-4217-8086-0c4d9ec9d23f","Type":"ContainerDied","Data":"b81582240171377eaa035924e503fcfc100b23e5b3b5219e24bfc21ef8647d3c"}
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.467151 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b81582240171377eaa035924e503fcfc100b23e5b3b5219e24bfc21ef8647d3c"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.467193 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b7ghx"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.556622 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw"]
Nov 23 15:17:31 crc kubenswrapper[4718]: E1123 15:17:31.557096 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea34a47-7094-4217-8086-0c4d9ec9d23f" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.557118 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea34a47-7094-4217-8086-0c4d9ec9d23f" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.557359 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea34a47-7094-4217-8086-0c4d9ec9d23f" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.558191 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.564636 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.564687 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.564690 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m6ng8"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.564643 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.564742 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.570041 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw"]
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.570838 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.681571 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw\" (UID: \"a7538df6-0093-40e0-b2da-59c2273f1f0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.681689 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw\" (UID: \"a7538df6-0093-40e0-b2da-59c2273f1f0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.681805 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw\" (UID: \"a7538df6-0093-40e0-b2da-59c2273f1f0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.681971 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dssgg\" (UniqueName: \"kubernetes.io/projected/a7538df6-0093-40e0-b2da-59c2273f1f0f-kube-api-access-dssgg\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw\" (UID: \"a7538df6-0093-40e0-b2da-59c2273f1f0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.682027 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw\" (UID: \"a7538df6-0093-40e0-b2da-59c2273f1f0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.682051 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw\" (UID: \"a7538df6-0093-40e0-b2da-59c2273f1f0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.783824 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw\" (UID: \"a7538df6-0093-40e0-b2da-59c2273f1f0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.784361 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dssgg\" (UniqueName: \"kubernetes.io/projected/a7538df6-0093-40e0-b2da-59c2273f1f0f-kube-api-access-dssgg\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw\" (UID: \"a7538df6-0093-40e0-b2da-59c2273f1f0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.784430 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw\" (UID: \"a7538df6-0093-40e0-b2da-59c2273f1f0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.784491 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw\" (UID: \"a7538df6-0093-40e0-b2da-59c2273f1f0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.784666 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw\" (UID: \"a7538df6-0093-40e0-b2da-59c2273f1f0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.784725 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw\" (UID: \"a7538df6-0093-40e0-b2da-59c2273f1f0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.788564 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw\" (UID: \"a7538df6-0093-40e0-b2da-59c2273f1f0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.788613 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw\" (UID: \"a7538df6-0093-40e0-b2da-59c2273f1f0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.788732 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw\" (UID: \"a7538df6-0093-40e0-b2da-59c2273f1f0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.790352 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw\" (UID: \"a7538df6-0093-40e0-b2da-59c2273f1f0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.791733 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw\" (UID: \"a7538df6-0093-40e0-b2da-59c2273f1f0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw"
Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.800985 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for
volume \"kube-api-access-dssgg\" (UniqueName: \"kubernetes.io/projected/a7538df6-0093-40e0-b2da-59c2273f1f0f-kube-api-access-dssgg\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw\" (UID: \"a7538df6-0093-40e0-b2da-59c2273f1f0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw" Nov 23 15:17:31 crc kubenswrapper[4718]: I1123 15:17:31.926590 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw" Nov 23 15:17:32 crc kubenswrapper[4718]: I1123 15:17:32.467639 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw"] Nov 23 15:17:33 crc kubenswrapper[4718]: I1123 15:17:33.483374 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw" event={"ID":"a7538df6-0093-40e0-b2da-59c2273f1f0f","Type":"ContainerStarted","Data":"d0a2bbc183dff430016d0ece0f314a9d2216a7b679137eead9a4848ce539d37f"} Nov 23 15:17:33 crc kubenswrapper[4718]: I1123 15:17:33.483722 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw" event={"ID":"a7538df6-0093-40e0-b2da-59c2273f1f0f","Type":"ContainerStarted","Data":"53f76316aa643d745058676794d8b2026ab6f5dee82924250efa5a2b0a427ff1"} Nov 23 15:17:33 crc kubenswrapper[4718]: I1123 15:17:33.503599 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw" podStartSLOduration=1.8827673489999999 podStartE2EDuration="2.503579815s" podCreationTimestamp="2025-11-23 15:17:31 +0000 UTC" firstStartedPulling="2025-11-23 15:17:32.472839414 +0000 UTC m=+1903.712459258" lastFinishedPulling="2025-11-23 15:17:33.09365188 +0000 UTC m=+1904.333271724" observedRunningTime="2025-11-23 15:17:33.496195087 +0000 UTC m=+1904.735814941" watchObservedRunningTime="2025-11-23 15:17:33.503579815 +0000 UTC m=+1904.743199679" Nov 23 15:18:20 crc kubenswrapper[4718]: I1123 15:18:20.880469 4718 generic.go:334] "Generic (PLEG): container finished" podID="a7538df6-0093-40e0-b2da-59c2273f1f0f" containerID="d0a2bbc183dff430016d0ece0f314a9d2216a7b679137eead9a4848ce539d37f" exitCode=0 Nov 23 15:18:20 crc kubenswrapper[4718]: I1123 15:18:20.880580 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw" event={"ID":"a7538df6-0093-40e0-b2da-59c2273f1f0f","Type":"ContainerDied","Data":"d0a2bbc183dff430016d0ece0f314a9d2216a7b679137eead9a4848ce539d37f"} Nov 23 15:18:22 crc kubenswrapper[4718]: I1123 15:18:22.253861 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw" Nov 23 15:18:22 crc kubenswrapper[4718]: I1123 15:18:22.352092 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"a7538df6-0093-40e0-b2da-59c2273f1f0f\" (UID: \"a7538df6-0093-40e0-b2da-59c2273f1f0f\") " Nov 23 15:18:22 crc kubenswrapper[4718]: I1123 15:18:22.352147 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-neutron-metadata-combined-ca-bundle\") pod \"a7538df6-0093-40e0-b2da-59c2273f1f0f\" (UID: \"a7538df6-0093-40e0-b2da-59c2273f1f0f\") " Nov 23 15:18:22 crc kubenswrapper[4718]: I1123 15:18:22.352191 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-inventory\") pod \"a7538df6-0093-40e0-b2da-59c2273f1f0f\" (UID: \"a7538df6-0093-40e0-b2da-59c2273f1f0f\") " Nov 23 15:18:22 crc kubenswrapper[4718]: I1123 15:18:22.352213 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-nova-metadata-neutron-config-0\") pod \"a7538df6-0093-40e0-b2da-59c2273f1f0f\" (UID: \"a7538df6-0093-40e0-b2da-59c2273f1f0f\") " Nov 23 15:18:22 crc kubenswrapper[4718]: I1123 15:18:22.352247 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-ssh-key\") pod \"a7538df6-0093-40e0-b2da-59c2273f1f0f\" (UID: \"a7538df6-0093-40e0-b2da-59c2273f1f0f\") " Nov 23 15:18:22 crc kubenswrapper[4718]: I1123 15:18:22.352422 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dssgg\" (UniqueName: \"kubernetes.io/projected/a7538df6-0093-40e0-b2da-59c2273f1f0f-kube-api-access-dssgg\") pod \"a7538df6-0093-40e0-b2da-59c2273f1f0f\" (UID: \"a7538df6-0093-40e0-b2da-59c2273f1f0f\") " Nov 23 15:18:22 crc kubenswrapper[4718]: I1123 15:18:22.360654 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "a7538df6-0093-40e0-b2da-59c2273f1f0f" (UID: "a7538df6-0093-40e0-b2da-59c2273f1f0f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:18:22 crc kubenswrapper[4718]: I1123 15:18:22.360683 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7538df6-0093-40e0-b2da-59c2273f1f0f-kube-api-access-dssgg" (OuterVolumeSpecName: "kube-api-access-dssgg") pod "a7538df6-0093-40e0-b2da-59c2273f1f0f" (UID: "a7538df6-0093-40e0-b2da-59c2273f1f0f"). InnerVolumeSpecName "kube-api-access-dssgg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:18:22 crc kubenswrapper[4718]: I1123 15:18:22.382368 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-inventory" (OuterVolumeSpecName: "inventory") pod "a7538df6-0093-40e0-b2da-59c2273f1f0f" (UID: "a7538df6-0093-40e0-b2da-59c2273f1f0f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:18:22 crc kubenswrapper[4718]: I1123 15:18:22.385310 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "a7538df6-0093-40e0-b2da-59c2273f1f0f" (UID: "a7538df6-0093-40e0-b2da-59c2273f1f0f"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:18:22 crc kubenswrapper[4718]: I1123 15:18:22.388332 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a7538df6-0093-40e0-b2da-59c2273f1f0f" (UID: "a7538df6-0093-40e0-b2da-59c2273f1f0f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:18:22 crc kubenswrapper[4718]: I1123 15:18:22.390413 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "a7538df6-0093-40e0-b2da-59c2273f1f0f" (UID: "a7538df6-0093-40e0-b2da-59c2273f1f0f"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:18:22 crc kubenswrapper[4718]: I1123 15:18:22.454475 4718 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 15:18:22 crc kubenswrapper[4718]: I1123 15:18:22.454513 4718 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:18:22 crc kubenswrapper[4718]: I1123 15:18:22.454524 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 15:18:22 crc kubenswrapper[4718]: I1123 15:18:22.454534 4718 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 15:18:22 crc kubenswrapper[4718]: I1123 15:18:22.454546 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7538df6-0093-40e0-b2da-59c2273f1f0f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 15:18:22 crc kubenswrapper[4718]: I1123 15:18:22.454554 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dssgg\" (UniqueName: \"kubernetes.io/projected/a7538df6-0093-40e0-b2da-59c2273f1f0f-kube-api-access-dssgg\") on node \"crc\" DevicePath \"\"" Nov 23 15:18:22 crc kubenswrapper[4718]: I1123 15:18:22.903115 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw" event={"ID":"a7538df6-0093-40e0-b2da-59c2273f1f0f","Type":"ContainerDied","Data":"53f76316aa643d745058676794d8b2026ab6f5dee82924250efa5a2b0a427ff1"} Nov 23 15:18:22 crc kubenswrapper[4718]: I1123 15:18:22.903426 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53f76316aa643d745058676794d8b2026ab6f5dee82924250efa5a2b0a427ff1" Nov 23 15:18:22 crc kubenswrapper[4718]: I1123 15:18:22.903196 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.052641 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.052704 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.056836 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj"] Nov 23 15:18:23 crc kubenswrapper[4718]: E1123 15:18:23.057412 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7538df6-0093-40e0-b2da-59c2273f1f0f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.057543 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7538df6-0093-40e0-b2da-59c2273f1f0f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.057850 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7538df6-0093-40e0-b2da-59c2273f1f0f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.058688 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.066265 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.066496 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.066568 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.077178 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj"] Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.066652 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m6ng8" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.066679 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.178659 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e50f1c92-4d4a-4a83-bf46-c8268c34d373-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj\" (UID: \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.178839 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e50f1c92-4d4a-4a83-bf46-c8268c34d373-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj\" (UID: \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.178959 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wln26\" (UniqueName: \"kubernetes.io/projected/e50f1c92-4d4a-4a83-bf46-c8268c34d373-kube-api-access-wln26\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj\" (UID: \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.178986 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e50f1c92-4d4a-4a83-bf46-c8268c34d373-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj\" (UID: \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.179029 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e50f1c92-4d4a-4a83-bf46-c8268c34d373-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj\" (UID: \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.280279 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/e50f1c92-4d4a-4a83-bf46-c8268c34d373-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj\" (UID: \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.280380 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e50f1c92-4d4a-4a83-bf46-c8268c34d373-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj\" (UID: \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.280415 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e50f1c92-4d4a-4a83-bf46-c8268c34d373-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj\" (UID: \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.280432 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wln26\" (UniqueName: \"kubernetes.io/projected/e50f1c92-4d4a-4a83-bf46-c8268c34d373-kube-api-access-wln26\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj\" (UID: \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.280517 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e50f1c92-4d4a-4a83-bf46-c8268c34d373-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj\" (UID: \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.287301 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e50f1c92-4d4a-4a83-bf46-c8268c34d373-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj\" (UID: \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.287323 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e50f1c92-4d4a-4a83-bf46-c8268c34d373-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj\" (UID: \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.287878 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e50f1c92-4d4a-4a83-bf46-c8268c34d373-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj\" (UID: \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.293216 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e50f1c92-4d4a-4a83-bf46-c8268c34d373-inventory\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj\" (UID: \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.298518 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wln26\" (UniqueName: \"kubernetes.io/projected/e50f1c92-4d4a-4a83-bf46-c8268c34d373-kube-api-access-wln26\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj\" (UID: \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.391320 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj" Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.895054 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj"] Nov 23 15:18:23 crc kubenswrapper[4718]: W1123 15:18:23.895577 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode50f1c92_4d4a_4a83_bf46_c8268c34d373.slice/crio-a3838fbb0460ea821105acd3f58a4112bf2664c1c35b8c9086ec2115fff2262d WatchSource:0}: Error finding container a3838fbb0460ea821105acd3f58a4112bf2664c1c35b8c9086ec2115fff2262d: Status 404 returned error can't find the container with id a3838fbb0460ea821105acd3f58a4112bf2664c1c35b8c9086ec2115fff2262d Nov 23 15:18:23 crc kubenswrapper[4718]: I1123 15:18:23.912572 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj" event={"ID":"e50f1c92-4d4a-4a83-bf46-c8268c34d373","Type":"ContainerStarted","Data":"a3838fbb0460ea821105acd3f58a4112bf2664c1c35b8c9086ec2115fff2262d"} Nov 23 15:18:24 crc kubenswrapper[4718]: I1123 15:18:24.936657 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj" event={"ID":"e50f1c92-4d4a-4a83-bf46-c8268c34d373","Type":"ContainerStarted","Data":"84db83cdea50c6248206abbdab682e651bf000fa30da7e60b6de567b2a700e18"} Nov 23 15:18:24 crc kubenswrapper[4718]: I1123 15:18:24.958753 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj" podStartSLOduration=1.536413301 podStartE2EDuration="1.958736173s" podCreationTimestamp="2025-11-23 15:18:23 +0000 UTC" firstStartedPulling="2025-11-23 15:18:23.898810326 +0000 UTC m=+1955.138430170" lastFinishedPulling="2025-11-23 15:18:24.321133198 +0000 UTC m=+1955.560753042" observedRunningTime="2025-11-23 15:18:24.952662052 +0000 UTC m=+1956.192281896" watchObservedRunningTime="2025-11-23 15:18:24.958736173 +0000 UTC m=+1956.198356017" Nov 23 15:18:53 crc kubenswrapper[4718]: I1123 15:18:53.053151 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 15:18:53 crc kubenswrapper[4718]: I1123 15:18:53.053859 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial 
Nov 23 15:19:23 crc kubenswrapper[4718]: I1123 15:19:23.053689 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 15:19:23 crc kubenswrapper[4718]: I1123 15:19:23.054432 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 15:19:23 crc kubenswrapper[4718]: I1123 15:19:23.054555 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw"
Nov 23 15:19:23 crc kubenswrapper[4718]: I1123 15:19:23.055610 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71d5542cf179ce80dcc7852dfd81dd988b779960126c8753eb23712fe413bccd"} pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 23 15:19:23 crc kubenswrapper[4718]: I1123 15:19:23.055722 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" containerID="cri-o://71d5542cf179ce80dcc7852dfd81dd988b779960126c8753eb23712fe413bccd" gracePeriod=600
Nov 23 15:19:23 crc kubenswrapper[4718]: I1123 15:19:23.469827 4718 generic.go:334] "Generic (PLEG): container finished" podID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerID="71d5542cf179ce80dcc7852dfd81dd988b779960126c8753eb23712fe413bccd" exitCode=0
Nov 23 15:19:23 crc kubenswrapper[4718]: I1123 15:19:23.469875 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerDied","Data":"71d5542cf179ce80dcc7852dfd81dd988b779960126c8753eb23712fe413bccd"}
Nov 23 15:19:23 crc kubenswrapper[4718]: I1123 15:19:23.470111 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerStarted","Data":"bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1"}
Nov 23 15:19:23 crc kubenswrapper[4718]: I1123 15:19:23.470138 4718 scope.go:117] "RemoveContainer" containerID="2c62aaeb57d7a7419661ffec0b62c70761aebd71565dd85e413b23c6d21d4f3c"
Nov 23 15:19:36 crc kubenswrapper[4718]: I1123 15:19:36.848387 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hvxgc"]
Nov 23 15:19:36 crc kubenswrapper[4718]: I1123 15:19:36.852088 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvxgc"
Nov 23 15:19:36 crc kubenswrapper[4718]: I1123 15:19:36.864978 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvxgc"]
Nov 23 15:19:36 crc kubenswrapper[4718]: I1123 15:19:36.913878 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45ef26b9-ba8a-44d4-b46c-2d2e34909e2e-utilities\") pod \"redhat-marketplace-hvxgc\" (UID: \"45ef26b9-ba8a-44d4-b46c-2d2e34909e2e\") " pod="openshift-marketplace/redhat-marketplace-hvxgc"
Nov 23 15:19:36 crc kubenswrapper[4718]: I1123 15:19:36.914295 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45ef26b9-ba8a-44d4-b46c-2d2e34909e2e-catalog-content\") pod \"redhat-marketplace-hvxgc\" (UID: \"45ef26b9-ba8a-44d4-b46c-2d2e34909e2e\") " pod="openshift-marketplace/redhat-marketplace-hvxgc"
Nov 23 15:19:36 crc kubenswrapper[4718]: I1123 15:19:36.914523 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npzm6\" (UniqueName: \"kubernetes.io/projected/45ef26b9-ba8a-44d4-b46c-2d2e34909e2e-kube-api-access-npzm6\") pod \"redhat-marketplace-hvxgc\" (UID: \"45ef26b9-ba8a-44d4-b46c-2d2e34909e2e\") " pod="openshift-marketplace/redhat-marketplace-hvxgc"
Nov 23 15:19:37 crc kubenswrapper[4718]: I1123 15:19:37.016599 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45ef26b9-ba8a-44d4-b46c-2d2e34909e2e-catalog-content\") pod \"redhat-marketplace-hvxgc\" (UID: \"45ef26b9-ba8a-44d4-b46c-2d2e34909e2e\") " pod="openshift-marketplace/redhat-marketplace-hvxgc"
Nov 23 15:19:37 crc kubenswrapper[4718]: I1123 15:19:37.016654 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npzm6\" (UniqueName: \"kubernetes.io/projected/45ef26b9-ba8a-44d4-b46c-2d2e34909e2e-kube-api-access-npzm6\") pod \"redhat-marketplace-hvxgc\" (UID: \"45ef26b9-ba8a-44d4-b46c-2d2e34909e2e\") " pod="openshift-marketplace/redhat-marketplace-hvxgc"
Nov 23 15:19:37 crc kubenswrapper[4718]: I1123 15:19:37.016750 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45ef26b9-ba8a-44d4-b46c-2d2e34909e2e-utilities\") pod \"redhat-marketplace-hvxgc\" (UID: \"45ef26b9-ba8a-44d4-b46c-2d2e34909e2e\") " pod="openshift-marketplace/redhat-marketplace-hvxgc"
Nov 23 15:19:37 crc kubenswrapper[4718]: I1123 15:19:37.017214 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45ef26b9-ba8a-44d4-b46c-2d2e34909e2e-catalog-content\") pod \"redhat-marketplace-hvxgc\" (UID: \"45ef26b9-ba8a-44d4-b46c-2d2e34909e2e\") " pod="openshift-marketplace/redhat-marketplace-hvxgc"
Nov 23 15:19:37 crc kubenswrapper[4718]: I1123 15:19:37.017227 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45ef26b9-ba8a-44d4-b46c-2d2e34909e2e-utilities\") pod \"redhat-marketplace-hvxgc\" (UID: \"45ef26b9-ba8a-44d4-b46c-2d2e34909e2e\") " pod="openshift-marketplace/redhat-marketplace-hvxgc"
Nov 23 15:19:37 crc kubenswrapper[4718]: I1123 15:19:37.036879 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npzm6\" (UniqueName: \"kubernetes.io/projected/45ef26b9-ba8a-44d4-b46c-2d2e34909e2e-kube-api-access-npzm6\") pod \"redhat-marketplace-hvxgc\" (UID: \"45ef26b9-ba8a-44d4-b46c-2d2e34909e2e\") " pod="openshift-marketplace/redhat-marketplace-hvxgc"
Nov 23 15:19:37 crc kubenswrapper[4718]: I1123 15:19:37.188808 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvxgc"
Nov 23 15:19:37 crc kubenswrapper[4718]: I1123 15:19:37.709310 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvxgc"]
Nov 23 15:19:38 crc kubenswrapper[4718]: I1123 15:19:38.620903 4718 generic.go:334] "Generic (PLEG): container finished" podID="45ef26b9-ba8a-44d4-b46c-2d2e34909e2e" containerID="f658421283344e02d34972fdb4b2ef48144318954c82d149b02faeae9a9cefd5" exitCode=0
Nov 23 15:19:38 crc kubenswrapper[4718]: I1123 15:19:38.620994 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvxgc" event={"ID":"45ef26b9-ba8a-44d4-b46c-2d2e34909e2e","Type":"ContainerDied","Data":"f658421283344e02d34972fdb4b2ef48144318954c82d149b02faeae9a9cefd5"}
Nov 23 15:19:38 crc kubenswrapper[4718]: I1123 15:19:38.621311 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvxgc" event={"ID":"45ef26b9-ba8a-44d4-b46c-2d2e34909e2e","Type":"ContainerStarted","Data":"ce7254cf5bd0902d0e5562086833fecbae054d1e3a3003dc10fdac6ba71849bb"}
Nov 23 15:19:39 crc kubenswrapper[4718]: I1123 15:19:39.629691 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvxgc" event={"ID":"45ef26b9-ba8a-44d4-b46c-2d2e34909e2e","Type":"ContainerStarted","Data":"828626b968c26d1881f3efe47c4077540f39b38fb605af8a3617890e5bd92e8c"}
Nov 23 15:19:40 crc kubenswrapper[4718]: I1123 15:19:40.641797 4718 generic.go:334] "Generic (PLEG): container finished" podID="45ef26b9-ba8a-44d4-b46c-2d2e34909e2e" containerID="828626b968c26d1881f3efe47c4077540f39b38fb605af8a3617890e5bd92e8c" exitCode=0
Nov 23 15:19:40 crc kubenswrapper[4718]: I1123 15:19:40.641874 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvxgc" event={"ID":"45ef26b9-ba8a-44d4-b46c-2d2e34909e2e","Type":"ContainerDied","Data":"828626b968c26d1881f3efe47c4077540f39b38fb605af8a3617890e5bd92e8c"}
Nov 23 15:19:41 crc kubenswrapper[4718]: I1123 15:19:41.653220 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvxgc" event={"ID":"45ef26b9-ba8a-44d4-b46c-2d2e34909e2e","Type":"ContainerStarted","Data":"8c7165c8c9100ede6d3362d4c15f05a58b8b0f0a4b1c32a3d50da3db184b6465"}
Nov 23 15:19:41 crc kubenswrapper[4718]: I1123 15:19:41.675610 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hvxgc" podStartSLOduration=3.249711166 podStartE2EDuration="5.675590642s" podCreationTimestamp="2025-11-23 15:19:36 +0000 UTC" firstStartedPulling="2025-11-23 15:19:38.623140084 +0000 UTC m=+2029.862759928" lastFinishedPulling="2025-11-23 15:19:41.04901956 +0000 UTC m=+2032.288639404" observedRunningTime="2025-11-23 15:19:41.672048408 +0000 UTC m=+2032.911668272" watchObservedRunningTime="2025-11-23 15:19:41.675590642 +0000 UTC m=+2032.915210486"
Nov 23 15:19:47 crc kubenswrapper[4718]: I1123 15:19:47.189453 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hvxgc"
pod="openshift-marketplace/redhat-marketplace-hvxgc" Nov 23 15:19:47 crc kubenswrapper[4718]: I1123 15:19:47.189998 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hvxgc" Nov 23 15:19:47 crc kubenswrapper[4718]: I1123 15:19:47.262654 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hvxgc" Nov 23 15:19:47 crc kubenswrapper[4718]: I1123 15:19:47.781126 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hvxgc" Nov 23 15:19:47 crc kubenswrapper[4718]: I1123 15:19:47.825972 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvxgc"] Nov 23 15:19:49 crc kubenswrapper[4718]: I1123 15:19:49.739756 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hvxgc" podUID="45ef26b9-ba8a-44d4-b46c-2d2e34909e2e" containerName="registry-server" containerID="cri-o://8c7165c8c9100ede6d3362d4c15f05a58b8b0f0a4b1c32a3d50da3db184b6465" gracePeriod=2 Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.274754 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvxgc" Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.392326 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45ef26b9-ba8a-44d4-b46c-2d2e34909e2e-utilities\") pod \"45ef26b9-ba8a-44d4-b46c-2d2e34909e2e\" (UID: \"45ef26b9-ba8a-44d4-b46c-2d2e34909e2e\") " Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.392527 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45ef26b9-ba8a-44d4-b46c-2d2e34909e2e-catalog-content\") pod \"45ef26b9-ba8a-44d4-b46c-2d2e34909e2e\" (UID: \"45ef26b9-ba8a-44d4-b46c-2d2e34909e2e\") " Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.392581 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npzm6\" (UniqueName: \"kubernetes.io/projected/45ef26b9-ba8a-44d4-b46c-2d2e34909e2e-kube-api-access-npzm6\") pod \"45ef26b9-ba8a-44d4-b46c-2d2e34909e2e\" (UID: \"45ef26b9-ba8a-44d4-b46c-2d2e34909e2e\") " Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.394046 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45ef26b9-ba8a-44d4-b46c-2d2e34909e2e-utilities" (OuterVolumeSpecName: "utilities") pod "45ef26b9-ba8a-44d4-b46c-2d2e34909e2e" (UID: "45ef26b9-ba8a-44d4-b46c-2d2e34909e2e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.405083 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45ef26b9-ba8a-44d4-b46c-2d2e34909e2e-kube-api-access-npzm6" (OuterVolumeSpecName: "kube-api-access-npzm6") pod "45ef26b9-ba8a-44d4-b46c-2d2e34909e2e" (UID: "45ef26b9-ba8a-44d4-b46c-2d2e34909e2e"). InnerVolumeSpecName "kube-api-access-npzm6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.412746 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45ef26b9-ba8a-44d4-b46c-2d2e34909e2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45ef26b9-ba8a-44d4-b46c-2d2e34909e2e" (UID: "45ef26b9-ba8a-44d4-b46c-2d2e34909e2e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.496247 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45ef26b9-ba8a-44d4-b46c-2d2e34909e2e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.497145 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npzm6\" (UniqueName: \"kubernetes.io/projected/45ef26b9-ba8a-44d4-b46c-2d2e34909e2e-kube-api-access-npzm6\") on node \"crc\" DevicePath \"\"" Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.497175 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45ef26b9-ba8a-44d4-b46c-2d2e34909e2e-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.750017 4718 generic.go:334] "Generic (PLEG): container finished" podID="45ef26b9-ba8a-44d4-b46c-2d2e34909e2e" containerID="8c7165c8c9100ede6d3362d4c15f05a58b8b0f0a4b1c32a3d50da3db184b6465" exitCode=0 Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.750104 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvxgc" Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.750084 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvxgc" event={"ID":"45ef26b9-ba8a-44d4-b46c-2d2e34909e2e","Type":"ContainerDied","Data":"8c7165c8c9100ede6d3362d4c15f05a58b8b0f0a4b1c32a3d50da3db184b6465"} Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.750248 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvxgc" event={"ID":"45ef26b9-ba8a-44d4-b46c-2d2e34909e2e","Type":"ContainerDied","Data":"ce7254cf5bd0902d0e5562086833fecbae054d1e3a3003dc10fdac6ba71849bb"} Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.750279 4718 scope.go:117] "RemoveContainer" containerID="8c7165c8c9100ede6d3362d4c15f05a58b8b0f0a4b1c32a3d50da3db184b6465" Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.772565 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvxgc"] Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.773413 4718 scope.go:117] "RemoveContainer" containerID="828626b968c26d1881f3efe47c4077540f39b38fb605af8a3617890e5bd92e8c" Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.783656 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvxgc"] Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.797701 4718 scope.go:117] "RemoveContainer" containerID="f658421283344e02d34972fdb4b2ef48144318954c82d149b02faeae9a9cefd5" Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.847701 4718 scope.go:117] "RemoveContainer" containerID="8c7165c8c9100ede6d3362d4c15f05a58b8b0f0a4b1c32a3d50da3db184b6465" Nov 23 15:19:50 crc kubenswrapper[4718]: E1123 15:19:50.848277 4718 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c7165c8c9100ede6d3362d4c15f05a58b8b0f0a4b1c32a3d50da3db184b6465\": container with ID starting with 8c7165c8c9100ede6d3362d4c15f05a58b8b0f0a4b1c32a3d50da3db184b6465 not found: ID does not exist" containerID="8c7165c8c9100ede6d3362d4c15f05a58b8b0f0a4b1c32a3d50da3db184b6465" Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.848315 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c7165c8c9100ede6d3362d4c15f05a58b8b0f0a4b1c32a3d50da3db184b6465"} err="failed to get container status \"8c7165c8c9100ede6d3362d4c15f05a58b8b0f0a4b1c32a3d50da3db184b6465\": rpc error: code = NotFound desc = could not find container \"8c7165c8c9100ede6d3362d4c15f05a58b8b0f0a4b1c32a3d50da3db184b6465\": container with ID starting with 8c7165c8c9100ede6d3362d4c15f05a58b8b0f0a4b1c32a3d50da3db184b6465 not found: ID does not exist" Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.848339 4718 scope.go:117] "RemoveContainer" containerID="828626b968c26d1881f3efe47c4077540f39b38fb605af8a3617890e5bd92e8c" Nov 23 15:19:50 crc kubenswrapper[4718]: E1123 15:19:50.848941 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"828626b968c26d1881f3efe47c4077540f39b38fb605af8a3617890e5bd92e8c\": container with ID starting with 828626b968c26d1881f3efe47c4077540f39b38fb605af8a3617890e5bd92e8c not found: ID does not exist" containerID="828626b968c26d1881f3efe47c4077540f39b38fb605af8a3617890e5bd92e8c" Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.848959 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"828626b968c26d1881f3efe47c4077540f39b38fb605af8a3617890e5bd92e8c"} err="failed to get container status \"828626b968c26d1881f3efe47c4077540f39b38fb605af8a3617890e5bd92e8c\": rpc error: code = NotFound desc = could not find container \"828626b968c26d1881f3efe47c4077540f39b38fb605af8a3617890e5bd92e8c\": container with ID starting with 828626b968c26d1881f3efe47c4077540f39b38fb605af8a3617890e5bd92e8c not found: ID does not exist" Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.848973 4718 scope.go:117] "RemoveContainer" containerID="f658421283344e02d34972fdb4b2ef48144318954c82d149b02faeae9a9cefd5" Nov 23 15:19:50 crc kubenswrapper[4718]: E1123 15:19:50.849236 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f658421283344e02d34972fdb4b2ef48144318954c82d149b02faeae9a9cefd5\": container with ID starting with f658421283344e02d34972fdb4b2ef48144318954c82d149b02faeae9a9cefd5 not found: ID does not exist" containerID="f658421283344e02d34972fdb4b2ef48144318954c82d149b02faeae9a9cefd5" Nov 23 15:19:50 crc kubenswrapper[4718]: I1123 15:19:50.849252 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f658421283344e02d34972fdb4b2ef48144318954c82d149b02faeae9a9cefd5"} err="failed to get container status \"f658421283344e02d34972fdb4b2ef48144318954c82d149b02faeae9a9cefd5\": rpc error: code = NotFound desc = could not find container \"f658421283344e02d34972fdb4b2ef48144318954c82d149b02faeae9a9cefd5\": container with ID starting with f658421283344e02d34972fdb4b2ef48144318954c82d149b02faeae9a9cefd5 not found: ID does not exist" Nov 23 15:19:52 crc kubenswrapper[4718]: I1123 15:19:52.457086 4718 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="45ef26b9-ba8a-44d4-b46c-2d2e34909e2e" path="/var/lib/kubelet/pods/45ef26b9-ba8a-44d4-b46c-2d2e34909e2e/volumes" Nov 23 15:19:56 crc kubenswrapper[4718]: I1123 15:19:56.077481 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j7zgr"] Nov 23 15:19:56 crc kubenswrapper[4718]: E1123 15:19:56.078181 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ef26b9-ba8a-44d4-b46c-2d2e34909e2e" containerName="extract-utilities" Nov 23 15:19:56 crc kubenswrapper[4718]: I1123 15:19:56.078195 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ef26b9-ba8a-44d4-b46c-2d2e34909e2e" containerName="extract-utilities" Nov 23 15:19:56 crc kubenswrapper[4718]: E1123 15:19:56.078218 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ef26b9-ba8a-44d4-b46c-2d2e34909e2e" containerName="registry-server" Nov 23 15:19:56 crc kubenswrapper[4718]: I1123 15:19:56.078224 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ef26b9-ba8a-44d4-b46c-2d2e34909e2e" containerName="registry-server" Nov 23 15:19:56 crc kubenswrapper[4718]: E1123 15:19:56.078257 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ef26b9-ba8a-44d4-b46c-2d2e34909e2e" containerName="extract-content" Nov 23 15:19:56 crc kubenswrapper[4718]: I1123 15:19:56.078264 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ef26b9-ba8a-44d4-b46c-2d2e34909e2e" containerName="extract-content" Nov 23 15:19:56 crc kubenswrapper[4718]: I1123 15:19:56.078502 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="45ef26b9-ba8a-44d4-b46c-2d2e34909e2e" containerName="registry-server" Nov 23 15:19:56 crc kubenswrapper[4718]: I1123 15:19:56.080665 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j7zgr" Nov 23 15:19:56 crc kubenswrapper[4718]: I1123 15:19:56.088938 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j7zgr"] Nov 23 15:19:56 crc kubenswrapper[4718]: I1123 15:19:56.210552 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c7aa5d-d751-4436-b8c7-4f1d3426828e-utilities\") pod \"certified-operators-j7zgr\" (UID: \"42c7aa5d-d751-4436-b8c7-4f1d3426828e\") " pod="openshift-marketplace/certified-operators-j7zgr" Nov 23 15:19:56 crc kubenswrapper[4718]: I1123 15:19:56.210755 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c7aa5d-d751-4436-b8c7-4f1d3426828e-catalog-content\") pod \"certified-operators-j7zgr\" (UID: \"42c7aa5d-d751-4436-b8c7-4f1d3426828e\") " pod="openshift-marketplace/certified-operators-j7zgr" Nov 23 15:19:56 crc kubenswrapper[4718]: I1123 15:19:56.210784 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptq7z\" (UniqueName: \"kubernetes.io/projected/42c7aa5d-d751-4436-b8c7-4f1d3426828e-kube-api-access-ptq7z\") pod \"certified-operators-j7zgr\" (UID: \"42c7aa5d-d751-4436-b8c7-4f1d3426828e\") " pod="openshift-marketplace/certified-operators-j7zgr" Nov 23 15:19:56 crc kubenswrapper[4718]: I1123 15:19:56.313139 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c7aa5d-d751-4436-b8c7-4f1d3426828e-catalog-content\") pod \"certified-operators-j7zgr\" (UID: \"42c7aa5d-d751-4436-b8c7-4f1d3426828e\") " pod="openshift-marketplace/certified-operators-j7zgr" Nov 23 15:19:56 crc kubenswrapper[4718]: I1123 15:19:56.313184 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptq7z\" (UniqueName: \"kubernetes.io/projected/42c7aa5d-d751-4436-b8c7-4f1d3426828e-kube-api-access-ptq7z\") pod \"certified-operators-j7zgr\" (UID: \"42c7aa5d-d751-4436-b8c7-4f1d3426828e\") " pod="openshift-marketplace/certified-operators-j7zgr" Nov 23 15:19:56 crc kubenswrapper[4718]: I1123 15:19:56.313261 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c7aa5d-d751-4436-b8c7-4f1d3426828e-utilities\") pod \"certified-operators-j7zgr\" (UID: \"42c7aa5d-d751-4436-b8c7-4f1d3426828e\") " pod="openshift-marketplace/certified-operators-j7zgr" Nov 23 15:19:56 crc kubenswrapper[4718]: I1123 15:19:56.313867 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c7aa5d-d751-4436-b8c7-4f1d3426828e-utilities\") pod \"certified-operators-j7zgr\" (UID: \"42c7aa5d-d751-4436-b8c7-4f1d3426828e\") " pod="openshift-marketplace/certified-operators-j7zgr" Nov 23 15:19:56 crc kubenswrapper[4718]: I1123 15:19:56.315016 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c7aa5d-d751-4436-b8c7-4f1d3426828e-catalog-content\") pod \"certified-operators-j7zgr\" (UID: \"42c7aa5d-d751-4436-b8c7-4f1d3426828e\") " pod="openshift-marketplace/certified-operators-j7zgr" Nov 23 15:19:56 crc kubenswrapper[4718]: I1123 15:19:56.349376 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ptq7z\" (UniqueName: \"kubernetes.io/projected/42c7aa5d-d751-4436-b8c7-4f1d3426828e-kube-api-access-ptq7z\") pod \"certified-operators-j7zgr\" (UID: \"42c7aa5d-d751-4436-b8c7-4f1d3426828e\") " pod="openshift-marketplace/certified-operators-j7zgr" Nov 23 15:19:56 crc kubenswrapper[4718]: I1123 15:19:56.440907 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j7zgr" Nov 23 15:19:56 crc kubenswrapper[4718]: I1123 15:19:56.962955 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j7zgr"] Nov 23 15:19:57 crc kubenswrapper[4718]: I1123 15:19:57.821483 4718 generic.go:334] "Generic (PLEG): container finished" podID="42c7aa5d-d751-4436-b8c7-4f1d3426828e" containerID="95e7bee48260430018178924163b442081d84e4e3665900fa20939be0db82545" exitCode=0 Nov 23 15:19:57 crc kubenswrapper[4718]: I1123 15:19:57.821653 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7zgr" event={"ID":"42c7aa5d-d751-4436-b8c7-4f1d3426828e","Type":"ContainerDied","Data":"95e7bee48260430018178924163b442081d84e4e3665900fa20939be0db82545"} Nov 23 15:19:57 crc kubenswrapper[4718]: I1123 15:19:57.821936 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7zgr" event={"ID":"42c7aa5d-d751-4436-b8c7-4f1d3426828e","Type":"ContainerStarted","Data":"fd7afa79912fb6b0df649b76c53af5812187f28f6402eb8515fca5fd4503519a"} Nov 23 15:19:58 crc kubenswrapper[4718]: E1123 15:19:58.698812 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45ef26b9_ba8a_44d4_b46c_2d2e34909e2e.slice/crio-ce7254cf5bd0902d0e5562086833fecbae054d1e3a3003dc10fdac6ba71849bb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45ef26b9_ba8a_44d4_b46c_2d2e34909e2e.slice\": RecentStats: unable to find data in memory cache]" Nov 23 15:19:59 crc kubenswrapper[4718]: I1123 15:19:59.841191 4718 generic.go:334] "Generic (PLEG): container finished" podID="42c7aa5d-d751-4436-b8c7-4f1d3426828e" containerID="cf8a925f566be2d4df47380273126c8d49ecc2dea912ea3666931be019e173b5" exitCode=0 Nov 23 15:19:59 crc kubenswrapper[4718]: I1123 15:19:59.841517 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7zgr" event={"ID":"42c7aa5d-d751-4436-b8c7-4f1d3426828e","Type":"ContainerDied","Data":"cf8a925f566be2d4df47380273126c8d49ecc2dea912ea3666931be019e173b5"} Nov 23 15:20:00 crc kubenswrapper[4718]: I1123 15:20:00.853708 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7zgr" event={"ID":"42c7aa5d-d751-4436-b8c7-4f1d3426828e","Type":"ContainerStarted","Data":"13eaffe6f07fcc94ae1816f573dd5695e0d1c878d3eaf22f397c1f340225b1a4"} Nov 23 15:20:00 crc kubenswrapper[4718]: I1123 15:20:00.874586 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j7zgr" podStartSLOduration=2.417361103 podStartE2EDuration="4.874563224s" podCreationTimestamp="2025-11-23 15:19:56 +0000 UTC" firstStartedPulling="2025-11-23 15:19:57.823162042 +0000 UTC m=+2049.062781886" lastFinishedPulling="2025-11-23 15:20:00.280364163 +0000 UTC m=+2051.519984007" observedRunningTime="2025-11-23 
Nov 23 15:20:06 crc kubenswrapper[4718]: I1123 15:20:06.452396 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j7zgr"
Nov 23 15:20:06 crc kubenswrapper[4718]: I1123 15:20:06.452723 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j7zgr"
Nov 23 15:20:06 crc kubenswrapper[4718]: I1123 15:20:06.488917 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j7zgr"
Nov 23 15:20:06 crc kubenswrapper[4718]: I1123 15:20:06.957850 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j7zgr"
Nov 23 15:20:07 crc kubenswrapper[4718]: I1123 15:20:07.731571 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j7zgr"]
Nov 23 15:20:08 crc kubenswrapper[4718]: E1123 15:20:08.930776 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45ef26b9_ba8a_44d4_b46c_2d2e34909e2e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45ef26b9_ba8a_44d4_b46c_2d2e34909e2e.slice/crio-ce7254cf5bd0902d0e5562086833fecbae054d1e3a3003dc10fdac6ba71849bb\": RecentStats: unable to find data in memory cache]"
Nov 23 15:20:08 crc kubenswrapper[4718]: I1123 15:20:08.935955 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j7zgr" podUID="42c7aa5d-d751-4436-b8c7-4f1d3426828e" containerName="registry-server" containerID="cri-o://13eaffe6f07fcc94ae1816f573dd5695e0d1c878d3eaf22f397c1f340225b1a4" gracePeriod=2
Nov 23 15:20:09 crc kubenswrapper[4718]: I1123 15:20:09.493332 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j7zgr"
Nov 23 15:20:09 crc kubenswrapper[4718]: I1123 15:20:09.670127 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c7aa5d-d751-4436-b8c7-4f1d3426828e-catalog-content\") pod \"42c7aa5d-d751-4436-b8c7-4f1d3426828e\" (UID: \"42c7aa5d-d751-4436-b8c7-4f1d3426828e\") "
Nov 23 15:20:09 crc kubenswrapper[4718]: I1123 15:20:09.670348 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c7aa5d-d751-4436-b8c7-4f1d3426828e-utilities\") pod \"42c7aa5d-d751-4436-b8c7-4f1d3426828e\" (UID: \"42c7aa5d-d751-4436-b8c7-4f1d3426828e\") "
Nov 23 15:20:09 crc kubenswrapper[4718]: I1123 15:20:09.670424 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptq7z\" (UniqueName: \"kubernetes.io/projected/42c7aa5d-d751-4436-b8c7-4f1d3426828e-kube-api-access-ptq7z\") pod \"42c7aa5d-d751-4436-b8c7-4f1d3426828e\" (UID: \"42c7aa5d-d751-4436-b8c7-4f1d3426828e\") "
Nov 23 15:20:09 crc kubenswrapper[4718]: I1123 15:20:09.673724 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42c7aa5d-d751-4436-b8c7-4f1d3426828e-utilities" (OuterVolumeSpecName: "utilities") pod "42c7aa5d-d751-4436-b8c7-4f1d3426828e" (UID: "42c7aa5d-d751-4436-b8c7-4f1d3426828e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 15:20:09 crc kubenswrapper[4718]: I1123 15:20:09.680164 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c7aa5d-d751-4436-b8c7-4f1d3426828e-kube-api-access-ptq7z" (OuterVolumeSpecName: "kube-api-access-ptq7z") pod "42c7aa5d-d751-4436-b8c7-4f1d3426828e" (UID: "42c7aa5d-d751-4436-b8c7-4f1d3426828e"). InnerVolumeSpecName "kube-api-access-ptq7z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:20:09 crc kubenswrapper[4718]: I1123 15:20:09.750237 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42c7aa5d-d751-4436-b8c7-4f1d3426828e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42c7aa5d-d751-4436-b8c7-4f1d3426828e" (UID: "42c7aa5d-d751-4436-b8c7-4f1d3426828e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 15:20:09 crc kubenswrapper[4718]: I1123 15:20:09.772358 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c7aa5d-d751-4436-b8c7-4f1d3426828e-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 15:20:09 crc kubenswrapper[4718]: I1123 15:20:09.772394 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c7aa5d-d751-4436-b8c7-4f1d3426828e-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 15:20:09 crc kubenswrapper[4718]: I1123 15:20:09.772410 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptq7z\" (UniqueName: \"kubernetes.io/projected/42c7aa5d-d751-4436-b8c7-4f1d3426828e-kube-api-access-ptq7z\") on node \"crc\" DevicePath \"\""
Nov 23 15:20:09 crc kubenswrapper[4718]: I1123 15:20:09.947707 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j7zgr"
Nov 23 15:20:09 crc kubenswrapper[4718]: I1123 15:20:09.947733 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7zgr" event={"ID":"42c7aa5d-d751-4436-b8c7-4f1d3426828e","Type":"ContainerDied","Data":"13eaffe6f07fcc94ae1816f573dd5695e0d1c878d3eaf22f397c1f340225b1a4"}
Nov 23 15:20:09 crc kubenswrapper[4718]: I1123 15:20:09.947772 4718 scope.go:117] "RemoveContainer" containerID="13eaffe6f07fcc94ae1816f573dd5695e0d1c878d3eaf22f397c1f340225b1a4"
Nov 23 15:20:09 crc kubenswrapper[4718]: I1123 15:20:09.947707 4718 generic.go:334] "Generic (PLEG): container finished" podID="42c7aa5d-d751-4436-b8c7-4f1d3426828e" containerID="13eaffe6f07fcc94ae1816f573dd5695e0d1c878d3eaf22f397c1f340225b1a4" exitCode=0
Nov 23 15:20:09 crc kubenswrapper[4718]: I1123 15:20:09.947843 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7zgr" event={"ID":"42c7aa5d-d751-4436-b8c7-4f1d3426828e","Type":"ContainerDied","Data":"fd7afa79912fb6b0df649b76c53af5812187f28f6402eb8515fca5fd4503519a"}
Nov 23 15:20:09 crc kubenswrapper[4718]: I1123 15:20:09.981085 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j7zgr"]
Nov 23 15:20:09 crc kubenswrapper[4718]: I1123 15:20:09.990249 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j7zgr"]
Nov 23 15:20:09 crc kubenswrapper[4718]: I1123 15:20:09.995906 4718 scope.go:117] "RemoveContainer" containerID="cf8a925f566be2d4df47380273126c8d49ecc2dea912ea3666931be019e173b5"
Nov 23 15:20:10 crc kubenswrapper[4718]: I1123 15:20:10.017004 4718 scope.go:117] "RemoveContainer" containerID="95e7bee48260430018178924163b442081d84e4e3665900fa20939be0db82545"
Nov 23 15:20:10 crc kubenswrapper[4718]: I1123 15:20:10.057680 4718 scope.go:117] "RemoveContainer" containerID="13eaffe6f07fcc94ae1816f573dd5695e0d1c878d3eaf22f397c1f340225b1a4"
Nov 23 15:20:10 crc kubenswrapper[4718]: E1123 15:20:10.058097 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13eaffe6f07fcc94ae1816f573dd5695e0d1c878d3eaf22f397c1f340225b1a4\": container with ID starting with 13eaffe6f07fcc94ae1816f573dd5695e0d1c878d3eaf22f397c1f340225b1a4 not found: ID does not exist" containerID="13eaffe6f07fcc94ae1816f573dd5695e0d1c878d3eaf22f397c1f340225b1a4"
Nov 23 15:20:10 crc kubenswrapper[4718]: I1123 15:20:10.058130 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13eaffe6f07fcc94ae1816f573dd5695e0d1c878d3eaf22f397c1f340225b1a4"} err="failed to get container status \"13eaffe6f07fcc94ae1816f573dd5695e0d1c878d3eaf22f397c1f340225b1a4\": rpc error: code = NotFound desc = could not find container \"13eaffe6f07fcc94ae1816f573dd5695e0d1c878d3eaf22f397c1f340225b1a4\": container with ID starting with 13eaffe6f07fcc94ae1816f573dd5695e0d1c878d3eaf22f397c1f340225b1a4 not found: ID does not exist"
Nov 23 15:20:10 crc kubenswrapper[4718]: I1123 15:20:10.058152 4718 scope.go:117] "RemoveContainer" containerID="cf8a925f566be2d4df47380273126c8d49ecc2dea912ea3666931be019e173b5"
Nov 23 15:20:10 crc kubenswrapper[4718]: E1123 15:20:10.058555 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf8a925f566be2d4df47380273126c8d49ecc2dea912ea3666931be019e173b5\": container with ID
starting with cf8a925f566be2d4df47380273126c8d49ecc2dea912ea3666931be019e173b5 not found: ID does not exist" containerID="cf8a925f566be2d4df47380273126c8d49ecc2dea912ea3666931be019e173b5" Nov 23 15:20:10 crc kubenswrapper[4718]: I1123 15:20:10.058587 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf8a925f566be2d4df47380273126c8d49ecc2dea912ea3666931be019e173b5"} err="failed to get container status \"cf8a925f566be2d4df47380273126c8d49ecc2dea912ea3666931be019e173b5\": rpc error: code = NotFound desc = could not find container \"cf8a925f566be2d4df47380273126c8d49ecc2dea912ea3666931be019e173b5\": container with ID starting with cf8a925f566be2d4df47380273126c8d49ecc2dea912ea3666931be019e173b5 not found: ID does not exist" Nov 23 15:20:10 crc kubenswrapper[4718]: I1123 15:20:10.058605 4718 scope.go:117] "RemoveContainer" containerID="95e7bee48260430018178924163b442081d84e4e3665900fa20939be0db82545" Nov 23 15:20:10 crc kubenswrapper[4718]: E1123 15:20:10.058872 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95e7bee48260430018178924163b442081d84e4e3665900fa20939be0db82545\": container with ID starting with 95e7bee48260430018178924163b442081d84e4e3665900fa20939be0db82545 not found: ID does not exist" containerID="95e7bee48260430018178924163b442081d84e4e3665900fa20939be0db82545" Nov 23 15:20:10 crc kubenswrapper[4718]: I1123 15:20:10.058902 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95e7bee48260430018178924163b442081d84e4e3665900fa20939be0db82545"} err="failed to get container status \"95e7bee48260430018178924163b442081d84e4e3665900fa20939be0db82545\": rpc error: code = NotFound desc = could not find container \"95e7bee48260430018178924163b442081d84e4e3665900fa20939be0db82545\": container with ID starting with 95e7bee48260430018178924163b442081d84e4e3665900fa20939be0db82545 not found: ID does not exist" Nov 23 15:20:10 crc kubenswrapper[4718]: I1123 15:20:10.456207 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42c7aa5d-d751-4436-b8c7-4f1d3426828e" path="/var/lib/kubelet/pods/42c7aa5d-d751-4436-b8c7-4f1d3426828e/volumes" Nov 23 15:20:19 crc kubenswrapper[4718]: E1123 15:20:19.188285 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45ef26b9_ba8a_44d4_b46c_2d2e34909e2e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45ef26b9_ba8a_44d4_b46c_2d2e34909e2e.slice/crio-ce7254cf5bd0902d0e5562086833fecbae054d1e3a3003dc10fdac6ba71849bb\": RecentStats: unable to find data in memory cache]" Nov 23 15:20:21 crc kubenswrapper[4718]: I1123 15:20:21.907992 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="da1e3b17-14ea-456e-a694-073e8fd4edaf" containerName="galera" probeResult="failure" output="command timed out" Nov 23 15:20:21 crc kubenswrapper[4718]: I1123 15:20:21.908885 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="da1e3b17-14ea-456e-a694-073e8fd4edaf" containerName="galera" probeResult="failure" output="command timed out" Nov 23 15:20:29 crc kubenswrapper[4718]: E1123 15:20:29.437597 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" 
err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45ef26b9_ba8a_44d4_b46c_2d2e34909e2e.slice/crio-ce7254cf5bd0902d0e5562086833fecbae054d1e3a3003dc10fdac6ba71849bb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45ef26b9_ba8a_44d4_b46c_2d2e34909e2e.slice\": RecentStats: unable to find data in memory cache]" Nov 23 15:20:39 crc kubenswrapper[4718]: E1123 15:20:39.679191 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45ef26b9_ba8a_44d4_b46c_2d2e34909e2e.slice/crio-ce7254cf5bd0902d0e5562086833fecbae054d1e3a3003dc10fdac6ba71849bb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45ef26b9_ba8a_44d4_b46c_2d2e34909e2e.slice\": RecentStats: unable to find data in memory cache]" Nov 23 15:20:45 crc kubenswrapper[4718]: I1123 15:20:45.428176 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b4h7x"] Nov 23 15:20:45 crc kubenswrapper[4718]: E1123 15:20:45.429280 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c7aa5d-d751-4436-b8c7-4f1d3426828e" containerName="registry-server" Nov 23 15:20:45 crc kubenswrapper[4718]: I1123 15:20:45.429298 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c7aa5d-d751-4436-b8c7-4f1d3426828e" containerName="registry-server" Nov 23 15:20:45 crc kubenswrapper[4718]: E1123 15:20:45.429340 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c7aa5d-d751-4436-b8c7-4f1d3426828e" containerName="extract-content" Nov 23 15:20:45 crc kubenswrapper[4718]: I1123 15:20:45.429347 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c7aa5d-d751-4436-b8c7-4f1d3426828e" containerName="extract-content" Nov 23 15:20:45 crc kubenswrapper[4718]: E1123 15:20:45.429362 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c7aa5d-d751-4436-b8c7-4f1d3426828e" containerName="extract-utilities" Nov 23 15:20:45 crc kubenswrapper[4718]: I1123 15:20:45.429370 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c7aa5d-d751-4436-b8c7-4f1d3426828e" containerName="extract-utilities" Nov 23 15:20:45 crc kubenswrapper[4718]: I1123 15:20:45.429642 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c7aa5d-d751-4436-b8c7-4f1d3426828e" containerName="registry-server" Nov 23 15:20:45 crc kubenswrapper[4718]: I1123 15:20:45.431256 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b4h7x" Nov 23 15:20:45 crc kubenswrapper[4718]: I1123 15:20:45.441735 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b4h7x"] Nov 23 15:20:45 crc kubenswrapper[4718]: I1123 15:20:45.575307 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td67v\" (UniqueName: \"kubernetes.io/projected/aac06051-f5c4-4215-b1a1-6d7ec76ec1c7-kube-api-access-td67v\") pod \"redhat-operators-b4h7x\" (UID: \"aac06051-f5c4-4215-b1a1-6d7ec76ec1c7\") " pod="openshift-marketplace/redhat-operators-b4h7x" Nov 23 15:20:45 crc kubenswrapper[4718]: I1123 15:20:45.575452 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aac06051-f5c4-4215-b1a1-6d7ec76ec1c7-utilities\") pod \"redhat-operators-b4h7x\" (UID: \"aac06051-f5c4-4215-b1a1-6d7ec76ec1c7\") " pod="openshift-marketplace/redhat-operators-b4h7x" Nov 23 15:20:45 crc kubenswrapper[4718]: I1123 15:20:45.575670 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aac06051-f5c4-4215-b1a1-6d7ec76ec1c7-catalog-content\") pod \"redhat-operators-b4h7x\" (UID: \"aac06051-f5c4-4215-b1a1-6d7ec76ec1c7\") " pod="openshift-marketplace/redhat-operators-b4h7x" Nov 23 15:20:45 crc kubenswrapper[4718]: I1123 15:20:45.677250 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aac06051-f5c4-4215-b1a1-6d7ec76ec1c7-utilities\") pod \"redhat-operators-b4h7x\" (UID: \"aac06051-f5c4-4215-b1a1-6d7ec76ec1c7\") " pod="openshift-marketplace/redhat-operators-b4h7x" Nov 23 15:20:45 crc kubenswrapper[4718]: I1123 15:20:45.677345 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aac06051-f5c4-4215-b1a1-6d7ec76ec1c7-catalog-content\") pod \"redhat-operators-b4h7x\" (UID: \"aac06051-f5c4-4215-b1a1-6d7ec76ec1c7\") " pod="openshift-marketplace/redhat-operators-b4h7x" Nov 23 15:20:45 crc kubenswrapper[4718]: I1123 15:20:45.677421 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td67v\" (UniqueName: \"kubernetes.io/projected/aac06051-f5c4-4215-b1a1-6d7ec76ec1c7-kube-api-access-td67v\") pod \"redhat-operators-b4h7x\" (UID: \"aac06051-f5c4-4215-b1a1-6d7ec76ec1c7\") " pod="openshift-marketplace/redhat-operators-b4h7x" Nov 23 15:20:45 crc kubenswrapper[4718]: I1123 15:20:45.678000 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aac06051-f5c4-4215-b1a1-6d7ec76ec1c7-utilities\") pod \"redhat-operators-b4h7x\" (UID: \"aac06051-f5c4-4215-b1a1-6d7ec76ec1c7\") " pod="openshift-marketplace/redhat-operators-b4h7x" Nov 23 15:20:45 crc kubenswrapper[4718]: I1123 15:20:45.678043 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aac06051-f5c4-4215-b1a1-6d7ec76ec1c7-catalog-content\") pod \"redhat-operators-b4h7x\" (UID: \"aac06051-f5c4-4215-b1a1-6d7ec76ec1c7\") " pod="openshift-marketplace/redhat-operators-b4h7x" Nov 23 15:20:45 crc kubenswrapper[4718]: I1123 15:20:45.696521 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-td67v\" (UniqueName: \"kubernetes.io/projected/aac06051-f5c4-4215-b1a1-6d7ec76ec1c7-kube-api-access-td67v\") pod \"redhat-operators-b4h7x\" (UID: \"aac06051-f5c4-4215-b1a1-6d7ec76ec1c7\") " pod="openshift-marketplace/redhat-operators-b4h7x" Nov 23 15:20:45 crc kubenswrapper[4718]: I1123 15:20:45.757911 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b4h7x" Nov 23 15:20:46 crc kubenswrapper[4718]: I1123 15:20:46.203469 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b4h7x"] Nov 23 15:20:46 crc kubenswrapper[4718]: I1123 15:20:46.321896 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4h7x" event={"ID":"aac06051-f5c4-4215-b1a1-6d7ec76ec1c7","Type":"ContainerStarted","Data":"ccd61b909ffb193d7e7f9c49c0b1525680a3bfbfdc4f1e08bca365cbc5231691"} Nov 23 15:20:47 crc kubenswrapper[4718]: I1123 15:20:47.332195 4718 generic.go:334] "Generic (PLEG): container finished" podID="aac06051-f5c4-4215-b1a1-6d7ec76ec1c7" containerID="1204ab7b2ed2068469d191f871da1e7116d7cf64b6d32ede099f182e02a44526" exitCode=0 Nov 23 15:20:47 crc kubenswrapper[4718]: I1123 15:20:47.332232 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4h7x" event={"ID":"aac06051-f5c4-4215-b1a1-6d7ec76ec1c7","Type":"ContainerDied","Data":"1204ab7b2ed2068469d191f871da1e7116d7cf64b6d32ede099f182e02a44526"} Nov 23 15:20:47 crc kubenswrapper[4718]: I1123 15:20:47.335350 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 15:20:48 crc kubenswrapper[4718]: I1123 15:20:48.342539 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4h7x" event={"ID":"aac06051-f5c4-4215-b1a1-6d7ec76ec1c7","Type":"ContainerStarted","Data":"f7f5e1d8dc112be776e80517fa98fb6779ac9fb0caf629398c7a59acd5cc52d6"} Nov 23 15:20:49 crc kubenswrapper[4718]: I1123 15:20:49.354530 4718 generic.go:334] "Generic (PLEG): container finished" podID="aac06051-f5c4-4215-b1a1-6d7ec76ec1c7" containerID="f7f5e1d8dc112be776e80517fa98fb6779ac9fb0caf629398c7a59acd5cc52d6" exitCode=0 Nov 23 15:20:49 crc kubenswrapper[4718]: I1123 15:20:49.354660 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4h7x" event={"ID":"aac06051-f5c4-4215-b1a1-6d7ec76ec1c7","Type":"ContainerDied","Data":"f7f5e1d8dc112be776e80517fa98fb6779ac9fb0caf629398c7a59acd5cc52d6"} Nov 23 15:20:49 crc kubenswrapper[4718]: E1123 15:20:49.948408 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45ef26b9_ba8a_44d4_b46c_2d2e34909e2e.slice/crio-ce7254cf5bd0902d0e5562086833fecbae054d1e3a3003dc10fdac6ba71849bb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45ef26b9_ba8a_44d4_b46c_2d2e34909e2e.slice\": RecentStats: unable to find data in memory cache]" Nov 23 15:20:51 crc kubenswrapper[4718]: I1123 15:20:51.816344 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r5tmb"] Nov 23 15:20:51 crc kubenswrapper[4718]: I1123 15:20:51.828309 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r5tmb"] Nov 23 15:20:51 crc kubenswrapper[4718]: I1123 15:20:51.828495 4718 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r5tmb" Nov 23 15:20:51 crc kubenswrapper[4718]: I1123 15:20:51.901275 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383d71c6-67cc-47b7-9478-b97d158b475d-catalog-content\") pod \"community-operators-r5tmb\" (UID: \"383d71c6-67cc-47b7-9478-b97d158b475d\") " pod="openshift-marketplace/community-operators-r5tmb" Nov 23 15:20:51 crc kubenswrapper[4718]: I1123 15:20:51.901372 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjqgh\" (UniqueName: \"kubernetes.io/projected/383d71c6-67cc-47b7-9478-b97d158b475d-kube-api-access-kjqgh\") pod \"community-operators-r5tmb\" (UID: \"383d71c6-67cc-47b7-9478-b97d158b475d\") " pod="openshift-marketplace/community-operators-r5tmb" Nov 23 15:20:51 crc kubenswrapper[4718]: I1123 15:20:51.901407 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383d71c6-67cc-47b7-9478-b97d158b475d-utilities\") pod \"community-operators-r5tmb\" (UID: \"383d71c6-67cc-47b7-9478-b97d158b475d\") " pod="openshift-marketplace/community-operators-r5tmb" Nov 23 15:20:52 crc kubenswrapper[4718]: I1123 15:20:52.003644 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383d71c6-67cc-47b7-9478-b97d158b475d-catalog-content\") pod \"community-operators-r5tmb\" (UID: \"383d71c6-67cc-47b7-9478-b97d158b475d\") " pod="openshift-marketplace/community-operators-r5tmb" Nov 23 15:20:52 crc kubenswrapper[4718]: I1123 15:20:52.003753 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjqgh\" (UniqueName: \"kubernetes.io/projected/383d71c6-67cc-47b7-9478-b97d158b475d-kube-api-access-kjqgh\") pod \"community-operators-r5tmb\" (UID: \"383d71c6-67cc-47b7-9478-b97d158b475d\") " pod="openshift-marketplace/community-operators-r5tmb" Nov 23 15:20:52 crc kubenswrapper[4718]: I1123 15:20:52.003807 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383d71c6-67cc-47b7-9478-b97d158b475d-utilities\") pod \"community-operators-r5tmb\" (UID: \"383d71c6-67cc-47b7-9478-b97d158b475d\") " pod="openshift-marketplace/community-operators-r5tmb" Nov 23 15:20:52 crc kubenswrapper[4718]: I1123 15:20:52.004412 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383d71c6-67cc-47b7-9478-b97d158b475d-catalog-content\") pod \"community-operators-r5tmb\" (UID: \"383d71c6-67cc-47b7-9478-b97d158b475d\") " pod="openshift-marketplace/community-operators-r5tmb" Nov 23 15:20:52 crc kubenswrapper[4718]: I1123 15:20:52.004461 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383d71c6-67cc-47b7-9478-b97d158b475d-utilities\") pod \"community-operators-r5tmb\" (UID: \"383d71c6-67cc-47b7-9478-b97d158b475d\") " pod="openshift-marketplace/community-operators-r5tmb" Nov 23 15:20:52 crc kubenswrapper[4718]: I1123 15:20:52.027043 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjqgh\" (UniqueName: 
\"kubernetes.io/projected/383d71c6-67cc-47b7-9478-b97d158b475d-kube-api-access-kjqgh\") pod \"community-operators-r5tmb\" (UID: \"383d71c6-67cc-47b7-9478-b97d158b475d\") " pod="openshift-marketplace/community-operators-r5tmb" Nov 23 15:20:52 crc kubenswrapper[4718]: I1123 15:20:52.164858 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r5tmb" Nov 23 15:20:56 crc kubenswrapper[4718]: W1123 15:20:56.705563 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod383d71c6_67cc_47b7_9478_b97d158b475d.slice/crio-254a5a58703deb995a12b21859ea8c8396399b597655e1bc22fba0c9d03227ed WatchSource:0}: Error finding container 254a5a58703deb995a12b21859ea8c8396399b597655e1bc22fba0c9d03227ed: Status 404 returned error can't find the container with id 254a5a58703deb995a12b21859ea8c8396399b597655e1bc22fba0c9d03227ed Nov 23 15:20:56 crc kubenswrapper[4718]: I1123 15:20:56.711703 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r5tmb"] Nov 23 15:20:57 crc kubenswrapper[4718]: I1123 15:20:57.443283 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4h7x" event={"ID":"aac06051-f5c4-4215-b1a1-6d7ec76ec1c7","Type":"ContainerStarted","Data":"073c336e971cee26bfe8987cf0c135174d4fcdd2c827f3f81293e75c642d29ab"} Nov 23 15:20:57 crc kubenswrapper[4718]: I1123 15:20:57.447120 4718 generic.go:334] "Generic (PLEG): container finished" podID="383d71c6-67cc-47b7-9478-b97d158b475d" containerID="7f74691da6d1dc7eb6f355cd41b6a0df3a027e681cd768956b9a71dd696e1179" exitCode=0 Nov 23 15:20:57 crc kubenswrapper[4718]: I1123 15:20:57.447171 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5tmb" event={"ID":"383d71c6-67cc-47b7-9478-b97d158b475d","Type":"ContainerDied","Data":"7f74691da6d1dc7eb6f355cd41b6a0df3a027e681cd768956b9a71dd696e1179"} Nov 23 15:20:57 crc kubenswrapper[4718]: I1123 15:20:57.447199 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5tmb" event={"ID":"383d71c6-67cc-47b7-9478-b97d158b475d","Type":"ContainerStarted","Data":"254a5a58703deb995a12b21859ea8c8396399b597655e1bc22fba0c9d03227ed"} Nov 23 15:20:57 crc kubenswrapper[4718]: I1123 15:20:57.495078 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b4h7x" podStartSLOduration=3.655598474 podStartE2EDuration="12.495053037s" podCreationTimestamp="2025-11-23 15:20:45 +0000 UTC" firstStartedPulling="2025-11-23 15:20:47.335007077 +0000 UTC m=+2098.574626941" lastFinishedPulling="2025-11-23 15:20:56.17446166 +0000 UTC m=+2107.414081504" observedRunningTime="2025-11-23 15:20:57.4681107 +0000 UTC m=+2108.707730534" watchObservedRunningTime="2025-11-23 15:20:57.495053037 +0000 UTC m=+2108.734672881" Nov 23 15:20:59 crc kubenswrapper[4718]: I1123 15:20:59.470396 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5tmb" event={"ID":"383d71c6-67cc-47b7-9478-b97d158b475d","Type":"ContainerStarted","Data":"406fb1686c864fa4a435257e8e3127f11a19a9697ccba39df3143fc14dffa10c"} Nov 23 15:21:00 crc kubenswrapper[4718]: I1123 15:21:00.480544 4718 generic.go:334] "Generic (PLEG): container finished" podID="383d71c6-67cc-47b7-9478-b97d158b475d" containerID="406fb1686c864fa4a435257e8e3127f11a19a9697ccba39df3143fc14dffa10c" 
exitCode=0 Nov 23 15:21:00 crc kubenswrapper[4718]: I1123 15:21:00.480620 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5tmb" event={"ID":"383d71c6-67cc-47b7-9478-b97d158b475d","Type":"ContainerDied","Data":"406fb1686c864fa4a435257e8e3127f11a19a9697ccba39df3143fc14dffa10c"} Nov 23 15:21:02 crc kubenswrapper[4718]: I1123 15:21:02.499835 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5tmb" event={"ID":"383d71c6-67cc-47b7-9478-b97d158b475d","Type":"ContainerStarted","Data":"4384344cbc3561f34bd91d86d7b4c53a2baa02fca8110f7a2b73357f29ad50d0"} Nov 23 15:21:02 crc kubenswrapper[4718]: I1123 15:21:02.523728 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r5tmb" podStartSLOduration=7.447307437 podStartE2EDuration="11.523706736s" podCreationTimestamp="2025-11-23 15:20:51 +0000 UTC" firstStartedPulling="2025-11-23 15:20:57.44889137 +0000 UTC m=+2108.688511234" lastFinishedPulling="2025-11-23 15:21:01.525290689 +0000 UTC m=+2112.764910533" observedRunningTime="2025-11-23 15:21:02.518689935 +0000 UTC m=+2113.758309789" watchObservedRunningTime="2025-11-23 15:21:02.523706736 +0000 UTC m=+2113.763326580" Nov 23 15:21:05 crc kubenswrapper[4718]: I1123 15:21:05.758170 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b4h7x" Nov 23 15:21:05 crc kubenswrapper[4718]: I1123 15:21:05.758561 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b4h7x" Nov 23 15:21:05 crc kubenswrapper[4718]: I1123 15:21:05.817255 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b4h7x" Nov 23 15:21:06 crc kubenswrapper[4718]: I1123 15:21:06.584871 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b4h7x" Nov 23 15:21:06 crc kubenswrapper[4718]: I1123 15:21:06.635067 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b4h7x"] Nov 23 15:21:08 crc kubenswrapper[4718]: I1123 15:21:08.545398 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b4h7x" podUID="aac06051-f5c4-4215-b1a1-6d7ec76ec1c7" containerName="registry-server" containerID="cri-o://073c336e971cee26bfe8987cf0c135174d4fcdd2c827f3f81293e75c642d29ab" gracePeriod=2 Nov 23 15:21:09 crc kubenswrapper[4718]: I1123 15:21:09.045847 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b4h7x" Nov 23 15:21:09 crc kubenswrapper[4718]: I1123 15:21:09.113628 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aac06051-f5c4-4215-b1a1-6d7ec76ec1c7-catalog-content\") pod \"aac06051-f5c4-4215-b1a1-6d7ec76ec1c7\" (UID: \"aac06051-f5c4-4215-b1a1-6d7ec76ec1c7\") " Nov 23 15:21:09 crc kubenswrapper[4718]: I1123 15:21:09.113782 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td67v\" (UniqueName: \"kubernetes.io/projected/aac06051-f5c4-4215-b1a1-6d7ec76ec1c7-kube-api-access-td67v\") pod \"aac06051-f5c4-4215-b1a1-6d7ec76ec1c7\" (UID: \"aac06051-f5c4-4215-b1a1-6d7ec76ec1c7\") " Nov 23 15:21:09 crc kubenswrapper[4718]: I1123 15:21:09.113898 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aac06051-f5c4-4215-b1a1-6d7ec76ec1c7-utilities\") pod \"aac06051-f5c4-4215-b1a1-6d7ec76ec1c7\" (UID: \"aac06051-f5c4-4215-b1a1-6d7ec76ec1c7\") " Nov 23 15:21:09 crc kubenswrapper[4718]: I1123 15:21:09.114789 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aac06051-f5c4-4215-b1a1-6d7ec76ec1c7-utilities" (OuterVolumeSpecName: "utilities") pod "aac06051-f5c4-4215-b1a1-6d7ec76ec1c7" (UID: "aac06051-f5c4-4215-b1a1-6d7ec76ec1c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:21:09 crc kubenswrapper[4718]: I1123 15:21:09.142698 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aac06051-f5c4-4215-b1a1-6d7ec76ec1c7-kube-api-access-td67v" (OuterVolumeSpecName: "kube-api-access-td67v") pod "aac06051-f5c4-4215-b1a1-6d7ec76ec1c7" (UID: "aac06051-f5c4-4215-b1a1-6d7ec76ec1c7"). InnerVolumeSpecName "kube-api-access-td67v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:21:09 crc kubenswrapper[4718]: I1123 15:21:09.218547 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td67v\" (UniqueName: \"kubernetes.io/projected/aac06051-f5c4-4215-b1a1-6d7ec76ec1c7-kube-api-access-td67v\") on node \"crc\" DevicePath \"\"" Nov 23 15:21:09 crc kubenswrapper[4718]: I1123 15:21:09.218592 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aac06051-f5c4-4215-b1a1-6d7ec76ec1c7-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 15:21:09 crc kubenswrapper[4718]: I1123 15:21:09.285697 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aac06051-f5c4-4215-b1a1-6d7ec76ec1c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aac06051-f5c4-4215-b1a1-6d7ec76ec1c7" (UID: "aac06051-f5c4-4215-b1a1-6d7ec76ec1c7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:21:09 crc kubenswrapper[4718]: I1123 15:21:09.319937 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aac06051-f5c4-4215-b1a1-6d7ec76ec1c7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 15:21:09 crc kubenswrapper[4718]: I1123 15:21:09.557886 4718 generic.go:334] "Generic (PLEG): container finished" podID="aac06051-f5c4-4215-b1a1-6d7ec76ec1c7" containerID="073c336e971cee26bfe8987cf0c135174d4fcdd2c827f3f81293e75c642d29ab" exitCode=0 Nov 23 15:21:09 crc kubenswrapper[4718]: I1123 15:21:09.557951 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b4h7x" Nov 23 15:21:09 crc kubenswrapper[4718]: I1123 15:21:09.557998 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4h7x" event={"ID":"aac06051-f5c4-4215-b1a1-6d7ec76ec1c7","Type":"ContainerDied","Data":"073c336e971cee26bfe8987cf0c135174d4fcdd2c827f3f81293e75c642d29ab"} Nov 23 15:21:09 crc kubenswrapper[4718]: I1123 15:21:09.559138 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4h7x" event={"ID":"aac06051-f5c4-4215-b1a1-6d7ec76ec1c7","Type":"ContainerDied","Data":"ccd61b909ffb193d7e7f9c49c0b1525680a3bfbfdc4f1e08bca365cbc5231691"} Nov 23 15:21:09 crc kubenswrapper[4718]: I1123 15:21:09.559164 4718 scope.go:117] "RemoveContainer" containerID="073c336e971cee26bfe8987cf0c135174d4fcdd2c827f3f81293e75c642d29ab" Nov 23 15:21:09 crc kubenswrapper[4718]: I1123 15:21:09.588569 4718 scope.go:117] "RemoveContainer" containerID="f7f5e1d8dc112be776e80517fa98fb6779ac9fb0caf629398c7a59acd5cc52d6" Nov 23 15:21:09 crc kubenswrapper[4718]: I1123 15:21:09.594829 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b4h7x"] Nov 23 15:21:09 crc kubenswrapper[4718]: I1123 15:21:09.603105 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b4h7x"] Nov 23 15:21:09 crc kubenswrapper[4718]: I1123 15:21:09.620197 4718 scope.go:117] "RemoveContainer" containerID="1204ab7b2ed2068469d191f871da1e7116d7cf64b6d32ede099f182e02a44526" Nov 23 15:21:09 crc kubenswrapper[4718]: I1123 15:21:09.668089 4718 scope.go:117] "RemoveContainer" containerID="073c336e971cee26bfe8987cf0c135174d4fcdd2c827f3f81293e75c642d29ab" Nov 23 15:21:09 crc kubenswrapper[4718]: E1123 15:21:09.668577 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"073c336e971cee26bfe8987cf0c135174d4fcdd2c827f3f81293e75c642d29ab\": container with ID starting with 073c336e971cee26bfe8987cf0c135174d4fcdd2c827f3f81293e75c642d29ab not found: ID does not exist" containerID="073c336e971cee26bfe8987cf0c135174d4fcdd2c827f3f81293e75c642d29ab" Nov 23 15:21:09 crc kubenswrapper[4718]: I1123 15:21:09.668611 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"073c336e971cee26bfe8987cf0c135174d4fcdd2c827f3f81293e75c642d29ab"} err="failed to get container status \"073c336e971cee26bfe8987cf0c135174d4fcdd2c827f3f81293e75c642d29ab\": rpc error: code = NotFound desc = could not find container \"073c336e971cee26bfe8987cf0c135174d4fcdd2c827f3f81293e75c642d29ab\": container with ID starting with 073c336e971cee26bfe8987cf0c135174d4fcdd2c827f3f81293e75c642d29ab not found: ID does not exist" Nov 23 15:21:09 crc 
kubenswrapper[4718]: I1123 15:21:09.668631 4718 scope.go:117] "RemoveContainer" containerID="f7f5e1d8dc112be776e80517fa98fb6779ac9fb0caf629398c7a59acd5cc52d6" Nov 23 15:21:09 crc kubenswrapper[4718]: E1123 15:21:09.669008 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7f5e1d8dc112be776e80517fa98fb6779ac9fb0caf629398c7a59acd5cc52d6\": container with ID starting with f7f5e1d8dc112be776e80517fa98fb6779ac9fb0caf629398c7a59acd5cc52d6 not found: ID does not exist" containerID="f7f5e1d8dc112be776e80517fa98fb6779ac9fb0caf629398c7a59acd5cc52d6" Nov 23 15:21:09 crc kubenswrapper[4718]: I1123 15:21:09.669034 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7f5e1d8dc112be776e80517fa98fb6779ac9fb0caf629398c7a59acd5cc52d6"} err="failed to get container status \"f7f5e1d8dc112be776e80517fa98fb6779ac9fb0caf629398c7a59acd5cc52d6\": rpc error: code = NotFound desc = could not find container \"f7f5e1d8dc112be776e80517fa98fb6779ac9fb0caf629398c7a59acd5cc52d6\": container with ID starting with f7f5e1d8dc112be776e80517fa98fb6779ac9fb0caf629398c7a59acd5cc52d6 not found: ID does not exist" Nov 23 15:21:09 crc kubenswrapper[4718]: I1123 15:21:09.669047 4718 scope.go:117] "RemoveContainer" containerID="1204ab7b2ed2068469d191f871da1e7116d7cf64b6d32ede099f182e02a44526" Nov 23 15:21:09 crc kubenswrapper[4718]: E1123 15:21:09.669305 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1204ab7b2ed2068469d191f871da1e7116d7cf64b6d32ede099f182e02a44526\": container with ID starting with 1204ab7b2ed2068469d191f871da1e7116d7cf64b6d32ede099f182e02a44526 not found: ID does not exist" containerID="1204ab7b2ed2068469d191f871da1e7116d7cf64b6d32ede099f182e02a44526" Nov 23 15:21:09 crc kubenswrapper[4718]: I1123 15:21:09.669332 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1204ab7b2ed2068469d191f871da1e7116d7cf64b6d32ede099f182e02a44526"} err="failed to get container status \"1204ab7b2ed2068469d191f871da1e7116d7cf64b6d32ede099f182e02a44526\": rpc error: code = NotFound desc = could not find container \"1204ab7b2ed2068469d191f871da1e7116d7cf64b6d32ede099f182e02a44526\": container with ID starting with 1204ab7b2ed2068469d191f871da1e7116d7cf64b6d32ede099f182e02a44526 not found: ID does not exist" Nov 23 15:21:10 crc kubenswrapper[4718]: I1123 15:21:10.456762 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aac06051-f5c4-4215-b1a1-6d7ec76ec1c7" path="/var/lib/kubelet/pods/aac06051-f5c4-4215-b1a1-6d7ec76ec1c7/volumes" Nov 23 15:21:12 crc kubenswrapper[4718]: I1123 15:21:12.165356 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r5tmb" Nov 23 15:21:12 crc kubenswrapper[4718]: I1123 15:21:12.167020 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r5tmb" Nov 23 15:21:12 crc kubenswrapper[4718]: I1123 15:21:12.222708 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r5tmb" Nov 23 15:21:12 crc kubenswrapper[4718]: I1123 15:21:12.636867 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r5tmb" Nov 23 15:21:12 crc kubenswrapper[4718]: I1123 15:21:12.690431 4718 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r5tmb"] Nov 23 15:21:14 crc kubenswrapper[4718]: I1123 15:21:14.605839 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r5tmb" podUID="383d71c6-67cc-47b7-9478-b97d158b475d" containerName="registry-server" containerID="cri-o://4384344cbc3561f34bd91d86d7b4c53a2baa02fca8110f7a2b73357f29ad50d0" gracePeriod=2 Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.152009 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r5tmb" Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.176597 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjqgh\" (UniqueName: \"kubernetes.io/projected/383d71c6-67cc-47b7-9478-b97d158b475d-kube-api-access-kjqgh\") pod \"383d71c6-67cc-47b7-9478-b97d158b475d\" (UID: \"383d71c6-67cc-47b7-9478-b97d158b475d\") " Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.176682 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383d71c6-67cc-47b7-9478-b97d158b475d-catalog-content\") pod \"383d71c6-67cc-47b7-9478-b97d158b475d\" (UID: \"383d71c6-67cc-47b7-9478-b97d158b475d\") " Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.176786 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383d71c6-67cc-47b7-9478-b97d158b475d-utilities\") pod \"383d71c6-67cc-47b7-9478-b97d158b475d\" (UID: \"383d71c6-67cc-47b7-9478-b97d158b475d\") " Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.177850 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/383d71c6-67cc-47b7-9478-b97d158b475d-utilities" (OuterVolumeSpecName: "utilities") pod "383d71c6-67cc-47b7-9478-b97d158b475d" (UID: "383d71c6-67cc-47b7-9478-b97d158b475d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.182522 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/383d71c6-67cc-47b7-9478-b97d158b475d-kube-api-access-kjqgh" (OuterVolumeSpecName: "kube-api-access-kjqgh") pod "383d71c6-67cc-47b7-9478-b97d158b475d" (UID: "383d71c6-67cc-47b7-9478-b97d158b475d"). InnerVolumeSpecName "kube-api-access-kjqgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.225594 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/383d71c6-67cc-47b7-9478-b97d158b475d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "383d71c6-67cc-47b7-9478-b97d158b475d" (UID: "383d71c6-67cc-47b7-9478-b97d158b475d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.279525 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjqgh\" (UniqueName: \"kubernetes.io/projected/383d71c6-67cc-47b7-9478-b97d158b475d-kube-api-access-kjqgh\") on node \"crc\" DevicePath \"\"" Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.279555 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383d71c6-67cc-47b7-9478-b97d158b475d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.279567 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383d71c6-67cc-47b7-9478-b97d158b475d-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.622160 4718 generic.go:334] "Generic (PLEG): container finished" podID="383d71c6-67cc-47b7-9478-b97d158b475d" containerID="4384344cbc3561f34bd91d86d7b4c53a2baa02fca8110f7a2b73357f29ad50d0" exitCode=0 Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.622220 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5tmb" event={"ID":"383d71c6-67cc-47b7-9478-b97d158b475d","Type":"ContainerDied","Data":"4384344cbc3561f34bd91d86d7b4c53a2baa02fca8110f7a2b73357f29ad50d0"} Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.622290 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5tmb" event={"ID":"383d71c6-67cc-47b7-9478-b97d158b475d","Type":"ContainerDied","Data":"254a5a58703deb995a12b21859ea8c8396399b597655e1bc22fba0c9d03227ed"} Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.622296 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r5tmb" Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.622321 4718 scope.go:117] "RemoveContainer" containerID="4384344cbc3561f34bd91d86d7b4c53a2baa02fca8110f7a2b73357f29ad50d0" Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.656805 4718 scope.go:117] "RemoveContainer" containerID="406fb1686c864fa4a435257e8e3127f11a19a9697ccba39df3143fc14dffa10c" Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.670285 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r5tmb"] Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.685891 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r5tmb"] Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.699645 4718 scope.go:117] "RemoveContainer" containerID="7f74691da6d1dc7eb6f355cd41b6a0df3a027e681cd768956b9a71dd696e1179" Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.731485 4718 scope.go:117] "RemoveContainer" containerID="4384344cbc3561f34bd91d86d7b4c53a2baa02fca8110f7a2b73357f29ad50d0" Nov 23 15:21:15 crc kubenswrapper[4718]: E1123 15:21:15.732014 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4384344cbc3561f34bd91d86d7b4c53a2baa02fca8110f7a2b73357f29ad50d0\": container with ID starting with 4384344cbc3561f34bd91d86d7b4c53a2baa02fca8110f7a2b73357f29ad50d0 not found: ID does not exist" containerID="4384344cbc3561f34bd91d86d7b4c53a2baa02fca8110f7a2b73357f29ad50d0" Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.732063 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4384344cbc3561f34bd91d86d7b4c53a2baa02fca8110f7a2b73357f29ad50d0"} err="failed to get container status \"4384344cbc3561f34bd91d86d7b4c53a2baa02fca8110f7a2b73357f29ad50d0\": rpc error: code = NotFound desc = could not find container \"4384344cbc3561f34bd91d86d7b4c53a2baa02fca8110f7a2b73357f29ad50d0\": container with ID starting with 4384344cbc3561f34bd91d86d7b4c53a2baa02fca8110f7a2b73357f29ad50d0 not found: ID does not exist" Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.732093 4718 scope.go:117] "RemoveContainer" containerID="406fb1686c864fa4a435257e8e3127f11a19a9697ccba39df3143fc14dffa10c" Nov 23 15:21:15 crc kubenswrapper[4718]: E1123 15:21:15.732397 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"406fb1686c864fa4a435257e8e3127f11a19a9697ccba39df3143fc14dffa10c\": container with ID starting with 406fb1686c864fa4a435257e8e3127f11a19a9697ccba39df3143fc14dffa10c not found: ID does not exist" containerID="406fb1686c864fa4a435257e8e3127f11a19a9697ccba39df3143fc14dffa10c" Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.732420 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"406fb1686c864fa4a435257e8e3127f11a19a9697ccba39df3143fc14dffa10c"} err="failed to get container status \"406fb1686c864fa4a435257e8e3127f11a19a9697ccba39df3143fc14dffa10c\": rpc error: code = NotFound desc = could not find container \"406fb1686c864fa4a435257e8e3127f11a19a9697ccba39df3143fc14dffa10c\": container with ID starting with 406fb1686c864fa4a435257e8e3127f11a19a9697ccba39df3143fc14dffa10c not found: ID does not exist" Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.732434 4718 scope.go:117] "RemoveContainer" 
containerID="7f74691da6d1dc7eb6f355cd41b6a0df3a027e681cd768956b9a71dd696e1179" Nov 23 15:21:15 crc kubenswrapper[4718]: E1123 15:21:15.732780 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f74691da6d1dc7eb6f355cd41b6a0df3a027e681cd768956b9a71dd696e1179\": container with ID starting with 7f74691da6d1dc7eb6f355cd41b6a0df3a027e681cd768956b9a71dd696e1179 not found: ID does not exist" containerID="7f74691da6d1dc7eb6f355cd41b6a0df3a027e681cd768956b9a71dd696e1179" Nov 23 15:21:15 crc kubenswrapper[4718]: I1123 15:21:15.732814 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f74691da6d1dc7eb6f355cd41b6a0df3a027e681cd768956b9a71dd696e1179"} err="failed to get container status \"7f74691da6d1dc7eb6f355cd41b6a0df3a027e681cd768956b9a71dd696e1179\": rpc error: code = NotFound desc = could not find container \"7f74691da6d1dc7eb6f355cd41b6a0df3a027e681cd768956b9a71dd696e1179\": container with ID starting with 7f74691da6d1dc7eb6f355cd41b6a0df3a027e681cd768956b9a71dd696e1179 not found: ID does not exist" Nov 23 15:21:16 crc kubenswrapper[4718]: I1123 15:21:16.450371 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="383d71c6-67cc-47b7-9478-b97d158b475d" path="/var/lib/kubelet/pods/383d71c6-67cc-47b7-9478-b97d158b475d/volumes" Nov 23 15:21:23 crc kubenswrapper[4718]: I1123 15:21:23.053320 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 15:21:23 crc kubenswrapper[4718]: I1123 15:21:23.053888 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 15:21:53 crc kubenswrapper[4718]: I1123 15:21:53.053774 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 15:21:53 crc kubenswrapper[4718]: I1123 15:21:53.054370 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 15:22:23 crc kubenswrapper[4718]: I1123 15:22:23.053325 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 15:22:23 crc kubenswrapper[4718]: I1123 15:22:23.053948 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 15:22:23 crc kubenswrapper[4718]: I1123 15:22:23.054008 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" Nov 23 15:22:23 crc kubenswrapper[4718]: I1123 15:22:23.054851 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1"} pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 15:22:23 crc kubenswrapper[4718]: I1123 15:22:23.054906 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" containerID="cri-o://bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" gracePeriod=600 Nov 23 15:22:23 crc kubenswrapper[4718]: E1123 15:22:23.191217 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:22:23 crc kubenswrapper[4718]: I1123 15:22:23.246645 4718 generic.go:334] "Generic (PLEG): container finished" podID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" exitCode=0 Nov 23 15:22:23 crc kubenswrapper[4718]: I1123 15:22:23.246702 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerDied","Data":"bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1"} Nov 23 15:22:23 crc kubenswrapper[4718]: I1123 15:22:23.246749 4718 scope.go:117] "RemoveContainer" containerID="71d5542cf179ce80dcc7852dfd81dd988b779960126c8753eb23712fe413bccd" Nov 23 15:22:23 crc kubenswrapper[4718]: I1123 15:22:23.249742 4718 scope.go:117] "RemoveContainer" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" Nov 23 15:22:23 crc kubenswrapper[4718]: E1123 15:22:23.250316 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:22:38 crc kubenswrapper[4718]: I1123 15:22:38.440876 4718 scope.go:117] "RemoveContainer" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" Nov 23 15:22:38 crc kubenswrapper[4718]: E1123 15:22:38.441577 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:22:46 crc kubenswrapper[4718]: I1123 15:22:46.475100 4718 generic.go:334] "Generic (PLEG): container finished" podID="e50f1c92-4d4a-4a83-bf46-c8268c34d373" containerID="84db83cdea50c6248206abbdab682e651bf000fa30da7e60b6de567b2a700e18" exitCode=0 Nov 23 15:22:46 crc kubenswrapper[4718]: I1123 15:22:46.475186 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj" event={"ID":"e50f1c92-4d4a-4a83-bf46-c8268c34d373","Type":"ContainerDied","Data":"84db83cdea50c6248206abbdab682e651bf000fa30da7e60b6de567b2a700e18"} Nov 23 15:22:47 crc kubenswrapper[4718]: I1123 15:22:47.900675 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.024859 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e50f1c92-4d4a-4a83-bf46-c8268c34d373-libvirt-secret-0\") pod \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\" (UID: \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\") " Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.025164 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wln26\" (UniqueName: \"kubernetes.io/projected/e50f1c92-4d4a-4a83-bf46-c8268c34d373-kube-api-access-wln26\") pod \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\" (UID: \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\") " Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.025198 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e50f1c92-4d4a-4a83-bf46-c8268c34d373-inventory\") pod \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\" (UID: \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\") " Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.025300 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e50f1c92-4d4a-4a83-bf46-c8268c34d373-ssh-key\") pod \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\" (UID: \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\") " Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.025378 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e50f1c92-4d4a-4a83-bf46-c8268c34d373-libvirt-combined-ca-bundle\") pod \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\" (UID: \"e50f1c92-4d4a-4a83-bf46-c8268c34d373\") " Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.030504 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e50f1c92-4d4a-4a83-bf46-c8268c34d373-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e50f1c92-4d4a-4a83-bf46-c8268c34d373" (UID: "e50f1c92-4d4a-4a83-bf46-c8268c34d373"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.032651 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e50f1c92-4d4a-4a83-bf46-c8268c34d373-kube-api-access-wln26" (OuterVolumeSpecName: "kube-api-access-wln26") pod "e50f1c92-4d4a-4a83-bf46-c8268c34d373" (UID: "e50f1c92-4d4a-4a83-bf46-c8268c34d373"). InnerVolumeSpecName "kube-api-access-wln26". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.055434 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e50f1c92-4d4a-4a83-bf46-c8268c34d373-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "e50f1c92-4d4a-4a83-bf46-c8268c34d373" (UID: "e50f1c92-4d4a-4a83-bf46-c8268c34d373"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.072345 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e50f1c92-4d4a-4a83-bf46-c8268c34d373-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e50f1c92-4d4a-4a83-bf46-c8268c34d373" (UID: "e50f1c92-4d4a-4a83-bf46-c8268c34d373"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.074736 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e50f1c92-4d4a-4a83-bf46-c8268c34d373-inventory" (OuterVolumeSpecName: "inventory") pod "e50f1c92-4d4a-4a83-bf46-c8268c34d373" (UID: "e50f1c92-4d4a-4a83-bf46-c8268c34d373"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.128263 4718 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e50f1c92-4d4a-4a83-bf46-c8268c34d373-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.128303 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wln26\" (UniqueName: \"kubernetes.io/projected/e50f1c92-4d4a-4a83-bf46-c8268c34d373-kube-api-access-wln26\") on node \"crc\" DevicePath \"\"" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.128318 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e50f1c92-4d4a-4a83-bf46-c8268c34d373-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.128328 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e50f1c92-4d4a-4a83-bf46-c8268c34d373-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.128340 4718 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e50f1c92-4d4a-4a83-bf46-c8268c34d373-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.498494 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj" event={"ID":"e50f1c92-4d4a-4a83-bf46-c8268c34d373","Type":"ContainerDied","Data":"a3838fbb0460ea821105acd3f58a4112bf2664c1c35b8c9086ec2115fff2262d"} Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.498552 4718 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3838fbb0460ea821105acd3f58a4112bf2664c1c35b8c9086ec2115fff2262d" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.498559 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.614495 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp"] Nov 23 15:22:48 crc kubenswrapper[4718]: E1123 15:22:48.615035 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383d71c6-67cc-47b7-9478-b97d158b475d" containerName="registry-server" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.615060 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="383d71c6-67cc-47b7-9478-b97d158b475d" containerName="registry-server" Nov 23 15:22:48 crc kubenswrapper[4718]: E1123 15:22:48.615118 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383d71c6-67cc-47b7-9478-b97d158b475d" containerName="extract-content" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.615130 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="383d71c6-67cc-47b7-9478-b97d158b475d" containerName="extract-content" Nov 23 15:22:48 crc kubenswrapper[4718]: E1123 15:22:48.615152 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac06051-f5c4-4215-b1a1-6d7ec76ec1c7" containerName="extract-utilities" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.615165 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac06051-f5c4-4215-b1a1-6d7ec76ec1c7" containerName="extract-utilities" Nov 23 15:22:48 crc kubenswrapper[4718]: E1123 15:22:48.615186 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383d71c6-67cc-47b7-9478-b97d158b475d" containerName="extract-utilities" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.615198 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="383d71c6-67cc-47b7-9478-b97d158b475d" containerName="extract-utilities" Nov 23 15:22:48 crc kubenswrapper[4718]: E1123 15:22:48.615219 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac06051-f5c4-4215-b1a1-6d7ec76ec1c7" containerName="registry-server" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.615232 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac06051-f5c4-4215-b1a1-6d7ec76ec1c7" containerName="registry-server" Nov 23 15:22:48 crc kubenswrapper[4718]: E1123 15:22:48.615256 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e50f1c92-4d4a-4a83-bf46-c8268c34d373" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.615269 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e50f1c92-4d4a-4a83-bf46-c8268c34d373" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 23 15:22:48 crc kubenswrapper[4718]: E1123 15:22:48.615303 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac06051-f5c4-4215-b1a1-6d7ec76ec1c7" containerName="extract-content" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.615316 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac06051-f5c4-4215-b1a1-6d7ec76ec1c7" containerName="extract-content" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.615652 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="383d71c6-67cc-47b7-9478-b97d158b475d" 
containerName="registry-server" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.615678 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="aac06051-f5c4-4215-b1a1-6d7ec76ec1c7" containerName="registry-server" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.615710 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="e50f1c92-4d4a-4a83-bf46-c8268c34d373" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.616700 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.619397 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.620070 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.620107 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.620130 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m6ng8" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.620070 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.620209 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.622135 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.623473 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp"] Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.739500 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.739554 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.739595 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.739647 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.739694 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.739733 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn5pv\" (UniqueName: \"kubernetes.io/projected/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-kube-api-access-kn5pv\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.739789 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.739976 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.740127 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.841619 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.841686 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc 
kubenswrapper[4718]: I1123 15:22:48.841722 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.841763 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.841786 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.842321 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.842377 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.842412 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.842456 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn5pv\" (UniqueName: \"kubernetes.io/projected/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-kube-api-access-kn5pv\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.844166 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.846672 4718 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.848343 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.848515 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.850400 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.850666 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.858368 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.859850 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.863382 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn5pv\" (UniqueName: \"kubernetes.io/projected/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-kube-api-access-kn5pv\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hl7dp\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:48 crc kubenswrapper[4718]: I1123 15:22:48.944617 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:22:49 crc kubenswrapper[4718]: I1123 15:22:49.484311 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp"] Nov 23 15:22:49 crc kubenswrapper[4718]: I1123 15:22:49.513305 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" event={"ID":"b4aa1c8b-75a3-4a8f-98e1-25456c27560f","Type":"ContainerStarted","Data":"202882a72edfa24735afd80a384bbe191c32afd4b1a05a4cece8ec2c79979575"} Nov 23 15:22:50 crc kubenswrapper[4718]: I1123 15:22:50.524871 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" event={"ID":"b4aa1c8b-75a3-4a8f-98e1-25456c27560f","Type":"ContainerStarted","Data":"6622113e39dd708df4bf484b9bdeaa716eecbbfe931d0f49eba61e703acd57d2"} Nov 23 15:22:50 crc kubenswrapper[4718]: I1123 15:22:50.544195 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" podStartSLOduration=2.03821785 podStartE2EDuration="2.544174932s" podCreationTimestamp="2025-11-23 15:22:48 +0000 UTC" firstStartedPulling="2025-11-23 15:22:49.490181025 +0000 UTC m=+2220.729800909" lastFinishedPulling="2025-11-23 15:22:49.996138147 +0000 UTC m=+2221.235757991" observedRunningTime="2025-11-23 15:22:50.541589509 +0000 UTC m=+2221.781209353" watchObservedRunningTime="2025-11-23 15:22:50.544174932 +0000 UTC m=+2221.783794776" Nov 23 15:22:51 crc kubenswrapper[4718]: I1123 15:22:51.441546 4718 scope.go:117] "RemoveContainer" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" Nov 23 15:22:51 crc kubenswrapper[4718]: E1123 15:22:51.442678 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:23:03 crc kubenswrapper[4718]: I1123 15:23:03.442573 4718 scope.go:117] "RemoveContainer" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" Nov 23 15:23:03 crc kubenswrapper[4718]: E1123 15:23:03.443928 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:23:16 crc kubenswrapper[4718]: I1123 15:23:16.440832 4718 scope.go:117] "RemoveContainer" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" Nov 23 15:23:16 crc kubenswrapper[4718]: E1123 15:23:16.441645 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" 
podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:23:27 crc kubenswrapper[4718]: I1123 15:23:27.441176 4718 scope.go:117] "RemoveContainer" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" Nov 23 15:23:27 crc kubenswrapper[4718]: E1123 15:23:27.441969 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:23:42 crc kubenswrapper[4718]: I1123 15:23:42.441918 4718 scope.go:117] "RemoveContainer" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" Nov 23 15:23:42 crc kubenswrapper[4718]: E1123 15:23:42.443296 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:23:57 crc kubenswrapper[4718]: I1123 15:23:57.442453 4718 scope.go:117] "RemoveContainer" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" Nov 23 15:23:57 crc kubenswrapper[4718]: E1123 15:23:57.443340 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:24:12 crc kubenswrapper[4718]: I1123 15:24:12.441004 4718 scope.go:117] "RemoveContainer" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" Nov 23 15:24:12 crc kubenswrapper[4718]: E1123 15:24:12.441737 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:24:26 crc kubenswrapper[4718]: I1123 15:24:26.441673 4718 scope.go:117] "RemoveContainer" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" Nov 23 15:24:26 crc kubenswrapper[4718]: E1123 15:24:26.442844 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:24:39 crc kubenswrapper[4718]: I1123 15:24:39.441027 4718 scope.go:117] "RemoveContainer" 
containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" Nov 23 15:24:39 crc kubenswrapper[4718]: E1123 15:24:39.441818 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:24:50 crc kubenswrapper[4718]: I1123 15:24:50.448391 4718 scope.go:117] "RemoveContainer" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" Nov 23 15:24:50 crc kubenswrapper[4718]: E1123 15:24:50.449073 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:25:01 crc kubenswrapper[4718]: I1123 15:25:01.441665 4718 scope.go:117] "RemoveContainer" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" Nov 23 15:25:01 crc kubenswrapper[4718]: E1123 15:25:01.442503 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:25:15 crc kubenswrapper[4718]: I1123 15:25:15.441777 4718 scope.go:117] "RemoveContainer" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" Nov 23 15:25:15 crc kubenswrapper[4718]: E1123 15:25:15.443371 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:25:27 crc kubenswrapper[4718]: I1123 15:25:27.440874 4718 scope.go:117] "RemoveContainer" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" Nov 23 15:25:27 crc kubenswrapper[4718]: E1123 15:25:27.441623 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:25:38 crc kubenswrapper[4718]: I1123 15:25:38.312653 4718 generic.go:334] "Generic (PLEG): container finished" podID="b4aa1c8b-75a3-4a8f-98e1-25456c27560f" containerID="6622113e39dd708df4bf484b9bdeaa716eecbbfe931d0f49eba61e703acd57d2" exitCode=0 Nov 23 15:25:38 crc 
kubenswrapper[4718]: I1123 15:25:38.312736 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" event={"ID":"b4aa1c8b-75a3-4a8f-98e1-25456c27560f","Type":"ContainerDied","Data":"6622113e39dd708df4bf484b9bdeaa716eecbbfe931d0f49eba61e703acd57d2"} Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.743307 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.824975 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-ssh-key\") pod \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.825080 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-migration-ssh-key-1\") pod \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.825164 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-migration-ssh-key-0\") pod \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.825228 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-inventory\") pod \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.825256 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-cell1-compute-config-1\") pod \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.825301 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn5pv\" (UniqueName: \"kubernetes.io/projected/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-kube-api-access-kn5pv\") pod \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.825342 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-cell1-compute-config-0\") pod \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.825387 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-combined-ca-bundle\") pod \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.825489 4718 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-extra-config-0\") pod \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\" (UID: \"b4aa1c8b-75a3-4a8f-98e1-25456c27560f\") " Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.835676 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "b4aa1c8b-75a3-4a8f-98e1-25456c27560f" (UID: "b4aa1c8b-75a3-4a8f-98e1-25456c27560f"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.838132 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-kube-api-access-kn5pv" (OuterVolumeSpecName: "kube-api-access-kn5pv") pod "b4aa1c8b-75a3-4a8f-98e1-25456c27560f" (UID: "b4aa1c8b-75a3-4a8f-98e1-25456c27560f"). InnerVolumeSpecName "kube-api-access-kn5pv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.854769 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b4aa1c8b-75a3-4a8f-98e1-25456c27560f" (UID: "b4aa1c8b-75a3-4a8f-98e1-25456c27560f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.855166 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "b4aa1c8b-75a3-4a8f-98e1-25456c27560f" (UID: "b4aa1c8b-75a3-4a8f-98e1-25456c27560f"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.856922 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "b4aa1c8b-75a3-4a8f-98e1-25456c27560f" (UID: "b4aa1c8b-75a3-4a8f-98e1-25456c27560f"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.862147 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-inventory" (OuterVolumeSpecName: "inventory") pod "b4aa1c8b-75a3-4a8f-98e1-25456c27560f" (UID: "b4aa1c8b-75a3-4a8f-98e1-25456c27560f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.866112 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "b4aa1c8b-75a3-4a8f-98e1-25456c27560f" (UID: "b4aa1c8b-75a3-4a8f-98e1-25456c27560f"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.877472 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "b4aa1c8b-75a3-4a8f-98e1-25456c27560f" (UID: "b4aa1c8b-75a3-4a8f-98e1-25456c27560f"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.886994 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "b4aa1c8b-75a3-4a8f-98e1-25456c27560f" (UID: "b4aa1c8b-75a3-4a8f-98e1-25456c27560f"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.928526 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.928564 4718 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.928578 4718 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.928591 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.928604 4718 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.928614 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn5pv\" (UniqueName: \"kubernetes.io/projected/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-kube-api-access-kn5pv\") on node \"crc\" DevicePath \"\"" Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.928628 4718 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.928638 4718 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:25:39 crc kubenswrapper[4718]: I1123 15:25:39.928649 4718 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b4aa1c8b-75a3-4a8f-98e1-25456c27560f-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 
15:25:40.334339 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" event={"ID":"b4aa1c8b-75a3-4a8f-98e1-25456c27560f","Type":"ContainerDied","Data":"202882a72edfa24735afd80a384bbe191c32afd4b1a05a4cece8ec2c79979575"} Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.334387 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="202882a72edfa24735afd80a384bbe191c32afd4b1a05a4cece8ec2c79979575" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.334514 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hl7dp" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.456611 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68"] Nov 23 15:25:40 crc kubenswrapper[4718]: E1123 15:25:40.457262 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4aa1c8b-75a3-4a8f-98e1-25456c27560f" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.457282 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4aa1c8b-75a3-4a8f-98e1-25456c27560f" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.457597 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4aa1c8b-75a3-4a8f-98e1-25456c27560f" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.458295 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68"] Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.458382 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.460882 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.461247 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.461500 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.461895 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.461971 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-m6ng8" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.538722 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-n6v68\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.538960 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-n6v68\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.539108 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-n6v68\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.539228 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-n6v68\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.539424 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-n6v68\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.542095 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjnzg\" 
(UniqueName: \"kubernetes.io/projected/34a8617a-3a87-48ad-b752-b324eaac4afe-kube-api-access-kjnzg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-n6v68\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.542357 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-n6v68\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.644726 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-n6v68\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.645035 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-n6v68\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.645076 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-n6v68\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.645143 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-n6v68\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.645192 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-n6v68\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.645214 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjnzg\" (UniqueName: \"kubernetes.io/projected/34a8617a-3a87-48ad-b752-b324eaac4afe-kube-api-access-kjnzg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-n6v68\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.645271 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-n6v68\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.655282 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-n6v68\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.655350 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-n6v68\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.655369 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-n6v68\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.655557 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-n6v68\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.658889 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-n6v68\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.659371 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-n6v68\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.672260 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjnzg\" (UniqueName: \"kubernetes.io/projected/34a8617a-3a87-48ad-b752-b324eaac4afe-kube-api-access-kjnzg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-n6v68\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:25:40 crc kubenswrapper[4718]: I1123 15:25:40.790709 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:25:41 crc kubenswrapper[4718]: I1123 15:25:41.287892 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68"] Nov 23 15:25:41 crc kubenswrapper[4718]: I1123 15:25:41.346158 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" event={"ID":"34a8617a-3a87-48ad-b752-b324eaac4afe","Type":"ContainerStarted","Data":"997a3f268f7c912553c96514e310436424f0778297388c9580140d418e91cfbd"} Nov 23 15:25:41 crc kubenswrapper[4718]: I1123 15:25:41.441218 4718 scope.go:117] "RemoveContainer" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" Nov 23 15:25:41 crc kubenswrapper[4718]: E1123 15:25:41.441483 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:25:42 crc kubenswrapper[4718]: I1123 15:25:42.355350 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" event={"ID":"34a8617a-3a87-48ad-b752-b324eaac4afe","Type":"ContainerStarted","Data":"693c1ec3007b34884c81d3665a660a55184a94e11b151bb704856470b2d5adc7"} Nov 23 15:25:42 crc kubenswrapper[4718]: I1123 15:25:42.381960 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" podStartSLOduration=1.893747183 podStartE2EDuration="2.381938078s" podCreationTimestamp="2025-11-23 15:25:40 +0000 UTC" firstStartedPulling="2025-11-23 15:25:41.294168931 +0000 UTC m=+2392.533788775" lastFinishedPulling="2025-11-23 15:25:41.782359826 +0000 UTC m=+2393.021979670" observedRunningTime="2025-11-23 15:25:42.372537924 +0000 UTC m=+2393.612157778" watchObservedRunningTime="2025-11-23 15:25:42.381938078 +0000 UTC m=+2393.621557922" Nov 23 15:25:53 crc kubenswrapper[4718]: I1123 15:25:53.441523 4718 scope.go:117] "RemoveContainer" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" Nov 23 15:25:53 crc kubenswrapper[4718]: E1123 15:25:53.442320 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:26:06 crc kubenswrapper[4718]: I1123 15:26:06.441761 4718 scope.go:117] "RemoveContainer" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" Nov 23 15:26:06 crc kubenswrapper[4718]: E1123 15:26:06.442527 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:26:20 crc kubenswrapper[4718]: I1123 15:26:20.785398 4718 scope.go:117] "RemoveContainer" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" Nov 23 15:26:20 crc kubenswrapper[4718]: E1123 15:26:20.786247 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:26:34 crc kubenswrapper[4718]: I1123 15:26:34.441924 4718 scope.go:117] "RemoveContainer" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" Nov 23 15:26:34 crc kubenswrapper[4718]: E1123 15:26:34.442812 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:26:46 crc kubenswrapper[4718]: I1123 15:26:46.442684 4718 scope.go:117] "RemoveContainer" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" Nov 23 15:26:46 crc kubenswrapper[4718]: E1123 15:26:46.446026 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:27:00 crc kubenswrapper[4718]: I1123 15:27:00.447201 4718 scope.go:117] "RemoveContainer" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" Nov 23 15:27:00 crc kubenswrapper[4718]: E1123 15:27:00.448010 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:27:15 crc kubenswrapper[4718]: I1123 15:27:15.441419 4718 scope.go:117] "RemoveContainer" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" Nov 23 15:27:15 crc kubenswrapper[4718]: E1123 15:27:15.442214 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:27:29 crc kubenswrapper[4718]: I1123 15:27:29.441240 4718 
scope.go:117] "RemoveContainer" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1" Nov 23 15:27:30 crc kubenswrapper[4718]: I1123 15:27:30.466076 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerStarted","Data":"f1a6f0050160fd8439a472e79a076aad465bfef06eedf01d8fded7481caa3ce4"} Nov 23 15:28:08 crc kubenswrapper[4718]: I1123 15:28:08.857270 4718 generic.go:334] "Generic (PLEG): container finished" podID="34a8617a-3a87-48ad-b752-b324eaac4afe" containerID="693c1ec3007b34884c81d3665a660a55184a94e11b151bb704856470b2d5adc7" exitCode=0 Nov 23 15:28:08 crc kubenswrapper[4718]: I1123 15:28:08.857372 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" event={"ID":"34a8617a-3a87-48ad-b752-b324eaac4afe","Type":"ContainerDied","Data":"693c1ec3007b34884c81d3665a660a55184a94e11b151bb704856470b2d5adc7"} Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 15:28:10.291466 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 15:28:10.424218 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-inventory\") pod \"34a8617a-3a87-48ad-b752-b324eaac4afe\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 15:28:10.424281 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-telemetry-combined-ca-bundle\") pod \"34a8617a-3a87-48ad-b752-b324eaac4afe\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 15:28:10.425124 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-ceilometer-compute-config-data-0\") pod \"34a8617a-3a87-48ad-b752-b324eaac4afe\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 15:28:10.425207 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-ceilometer-compute-config-data-2\") pod \"34a8617a-3a87-48ad-b752-b324eaac4afe\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 15:28:10.425305 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-ceilometer-compute-config-data-1\") pod \"34a8617a-3a87-48ad-b752-b324eaac4afe\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 15:28:10.425363 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjnzg\" (UniqueName: \"kubernetes.io/projected/34a8617a-3a87-48ad-b752-b324eaac4afe-kube-api-access-kjnzg\") pod \"34a8617a-3a87-48ad-b752-b324eaac4afe\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 
15:28:10.425424 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-ssh-key\") pod \"34a8617a-3a87-48ad-b752-b324eaac4afe\" (UID: \"34a8617a-3a87-48ad-b752-b324eaac4afe\") " Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 15:28:10.430763 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34a8617a-3a87-48ad-b752-b324eaac4afe-kube-api-access-kjnzg" (OuterVolumeSpecName: "kube-api-access-kjnzg") pod "34a8617a-3a87-48ad-b752-b324eaac4afe" (UID: "34a8617a-3a87-48ad-b752-b324eaac4afe"). InnerVolumeSpecName "kube-api-access-kjnzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 15:28:10.431320 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "34a8617a-3a87-48ad-b752-b324eaac4afe" (UID: "34a8617a-3a87-48ad-b752-b324eaac4afe"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 15:28:10.453547 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "34a8617a-3a87-48ad-b752-b324eaac4afe" (UID: "34a8617a-3a87-48ad-b752-b324eaac4afe"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 15:28:10.461086 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "34a8617a-3a87-48ad-b752-b324eaac4afe" (UID: "34a8617a-3a87-48ad-b752-b324eaac4afe"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 15:28:10.462653 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "34a8617a-3a87-48ad-b752-b324eaac4afe" (UID: "34a8617a-3a87-48ad-b752-b324eaac4afe"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 15:28:10.473036 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "34a8617a-3a87-48ad-b752-b324eaac4afe" (UID: "34a8617a-3a87-48ad-b752-b324eaac4afe"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 15:28:10.477576 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-inventory" (OuterVolumeSpecName: "inventory") pod "34a8617a-3a87-48ad-b752-b324eaac4afe" (UID: "34a8617a-3a87-48ad-b752-b324eaac4afe"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 15:28:10.527763 4718 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 15:28:10.527799 4718 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 15:28:10.527817 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjnzg\" (UniqueName: \"kubernetes.io/projected/34a8617a-3a87-48ad-b752-b324eaac4afe-kube-api-access-kjnzg\") on node \"crc\" DevicePath \"\"" Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 15:28:10.527830 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 15:28:10.527844 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 15:28:10.527857 4718 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 15:28:10.527867 4718 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/34a8617a-3a87-48ad-b752-b324eaac4afe-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 15:28:10.874133 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" event={"ID":"34a8617a-3a87-48ad-b752-b324eaac4afe","Type":"ContainerDied","Data":"997a3f268f7c912553c96514e310436424f0778297388c9580140d418e91cfbd"} Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 15:28:10.874426 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="997a3f268f7c912553c96514e310436424f0778297388c9580140d418e91cfbd" Nov 23 15:28:10 crc kubenswrapper[4718]: I1123 15:28:10.874176 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-n6v68" Nov 23 15:29:10 crc kubenswrapper[4718]: I1123 15:29:10.742416 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 23 15:29:10 crc kubenswrapper[4718]: E1123 15:29:10.743492 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a8617a-3a87-48ad-b752-b324eaac4afe" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 23 15:29:10 crc kubenswrapper[4718]: I1123 15:29:10.743519 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a8617a-3a87-48ad-b752-b324eaac4afe" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 23 15:29:10 crc kubenswrapper[4718]: I1123 15:29:10.743808 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="34a8617a-3a87-48ad-b752-b324eaac4afe" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 23 15:29:10 crc kubenswrapper[4718]: I1123 15:29:10.744644 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 23 15:29:10 crc kubenswrapper[4718]: I1123 15:29:10.747306 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 23 15:29:10 crc kubenswrapper[4718]: I1123 15:29:10.747321 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 23 15:29:10 crc kubenswrapper[4718]: I1123 15:29:10.747909 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 23 15:29:10 crc kubenswrapper[4718]: I1123 15:29:10.748061 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-8m9rh" Nov 23 15:29:10 crc kubenswrapper[4718]: I1123 15:29:10.754883 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 23 15:29:10 crc kubenswrapper[4718]: I1123 15:29:10.897308 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2053f5ea-ae54-4b1d-951f-2355f69f1062-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:10 crc kubenswrapper[4718]: I1123 15:29:10.897360 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2053f5ea-ae54-4b1d-951f-2355f69f1062-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:10 crc kubenswrapper[4718]: I1123 15:29:10.897382 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw9sf\" (UniqueName: \"kubernetes.io/projected/2053f5ea-ae54-4b1d-951f-2355f69f1062-kube-api-access-fw9sf\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:10 crc kubenswrapper[4718]: I1123 15:29:10.897562 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2053f5ea-ae54-4b1d-951f-2355f69f1062-config-data\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " 
pod="openstack/tempest-tests-tempest" Nov 23 15:29:10 crc kubenswrapper[4718]: I1123 15:29:10.897589 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2053f5ea-ae54-4b1d-951f-2355f69f1062-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:10 crc kubenswrapper[4718]: I1123 15:29:10.897621 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2053f5ea-ae54-4b1d-951f-2355f69f1062-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:10 crc kubenswrapper[4718]: I1123 15:29:10.897637 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2053f5ea-ae54-4b1d-951f-2355f69f1062-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:10 crc kubenswrapper[4718]: I1123 15:29:10.897659 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2053f5ea-ae54-4b1d-951f-2355f69f1062-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:10 crc kubenswrapper[4718]: I1123 15:29:10.897682 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:10 crc kubenswrapper[4718]: I1123 15:29:10.999702 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2053f5ea-ae54-4b1d-951f-2355f69f1062-config-data\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:10 crc kubenswrapper[4718]: I1123 15:29:10.999763 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2053f5ea-ae54-4b1d-951f-2355f69f1062-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:10 crc kubenswrapper[4718]: I1123 15:29:10.999818 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2053f5ea-ae54-4b1d-951f-2355f69f1062-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:10 crc kubenswrapper[4718]: I1123 15:29:10.999846 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2053f5ea-ae54-4b1d-951f-2355f69f1062-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " 
pod="openstack/tempest-tests-tempest" Nov 23 15:29:10 crc kubenswrapper[4718]: I1123 15:29:10.999873 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2053f5ea-ae54-4b1d-951f-2355f69f1062-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:10 crc kubenswrapper[4718]: I1123 15:29:10.999897 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:11 crc kubenswrapper[4718]: I1123 15:29:10.999975 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2053f5ea-ae54-4b1d-951f-2355f69f1062-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:11 crc kubenswrapper[4718]: I1123 15:29:11.000003 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2053f5ea-ae54-4b1d-951f-2355f69f1062-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:11 crc kubenswrapper[4718]: I1123 15:29:11.000033 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw9sf\" (UniqueName: \"kubernetes.io/projected/2053f5ea-ae54-4b1d-951f-2355f69f1062-kube-api-access-fw9sf\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:11 crc kubenswrapper[4718]: I1123 15:29:11.000514 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2053f5ea-ae54-4b1d-951f-2355f69f1062-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:11 crc kubenswrapper[4718]: I1123 15:29:11.000920 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tempest-tests-tempest" Nov 23 15:29:11 crc kubenswrapper[4718]: I1123 15:29:11.001366 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2053f5ea-ae54-4b1d-951f-2355f69f1062-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:11 crc kubenswrapper[4718]: I1123 15:29:11.001422 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2053f5ea-ae54-4b1d-951f-2355f69f1062-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:11 crc kubenswrapper[4718]: I1123 15:29:11.001505 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2053f5ea-ae54-4b1d-951f-2355f69f1062-config-data\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:11 crc kubenswrapper[4718]: I1123 15:29:11.007846 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2053f5ea-ae54-4b1d-951f-2355f69f1062-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:11 crc kubenswrapper[4718]: I1123 15:29:11.010149 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2053f5ea-ae54-4b1d-951f-2355f69f1062-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:11 crc kubenswrapper[4718]: I1123 15:29:11.020329 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2053f5ea-ae54-4b1d-951f-2355f69f1062-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:11 crc kubenswrapper[4718]: I1123 15:29:11.026868 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:11 crc kubenswrapper[4718]: I1123 15:29:11.030555 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw9sf\" (UniqueName: \"kubernetes.io/projected/2053f5ea-ae54-4b1d-951f-2355f69f1062-kube-api-access-fw9sf\") pod \"tempest-tests-tempest\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " pod="openstack/tempest-tests-tempest" Nov 23 15:29:11 crc kubenswrapper[4718]: I1123 15:29:11.085676 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 23 15:29:11 crc kubenswrapper[4718]: I1123 15:29:11.553191 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 15:29:11 crc kubenswrapper[4718]: I1123 15:29:11.573108 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 23 15:29:12 crc kubenswrapper[4718]: I1123 15:29:12.458464 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2053f5ea-ae54-4b1d-951f-2355f69f1062","Type":"ContainerStarted","Data":"7912770b337a0c27cf8daf277ecc665b62b6df8e9c9bccb71604a96f6fb9144e"} Nov 23 15:29:42 crc kubenswrapper[4718]: E1123 15:29:42.977632 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 23 15:29:42 crc kubenswrapper[4718]: E1123 15:29:42.978400 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw9sf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,Stdi
nOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(2053f5ea-ae54-4b1d-951f-2355f69f1062): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 15:29:42 crc kubenswrapper[4718]: E1123 15:29:42.979607 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="2053f5ea-ae54-4b1d-951f-2355f69f1062" Nov 23 15:29:43 crc kubenswrapper[4718]: E1123 15:29:43.773093 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="2053f5ea-ae54-4b1d-951f-2355f69f1062" Nov 23 15:29:53 crc kubenswrapper[4718]: I1123 15:29:53.052764 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 15:29:53 crc kubenswrapper[4718]: I1123 15:29:53.053326 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 15:29:54 crc kubenswrapper[4718]: I1123 15:29:54.927722 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 23 15:29:55 crc kubenswrapper[4718]: I1123 15:29:55.876722 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2053f5ea-ae54-4b1d-951f-2355f69f1062","Type":"ContainerStarted","Data":"043ae55e25ed5857f85545535cba3475e54819fa5b1a7dbc1d338d30a63a6ff6"} Nov 23 15:29:55 crc kubenswrapper[4718]: I1123 15:29:55.894221 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.523045407 podStartE2EDuration="46.894207278s" podCreationTimestamp="2025-11-23 15:29:09 +0000 UTC" firstStartedPulling="2025-11-23 15:29:11.552891417 +0000 UTC m=+2602.792511261" lastFinishedPulling="2025-11-23 15:29:54.924053288 +0000 UTC m=+2646.163673132" observedRunningTime="2025-11-23 15:29:55.891751852 +0000 UTC m=+2647.131371696" watchObservedRunningTime="2025-11-23 15:29:55.894207278 +0000 UTC m=+2647.133827122" Nov 23 15:30:00 crc kubenswrapper[4718]: I1123 15:30:00.152387 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398530-sjz7j"] 
Nov 23 15:30:00 crc kubenswrapper[4718]: I1123 15:30:00.154387 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398530-sjz7j"
Nov 23 15:30:00 crc kubenswrapper[4718]: I1123 15:30:00.157675 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 23 15:30:00 crc kubenswrapper[4718]: I1123 15:30:00.167582 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 23 15:30:00 crc kubenswrapper[4718]: I1123 15:30:00.169503 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398530-sjz7j"]
Nov 23 15:30:00 crc kubenswrapper[4718]: I1123 15:30:00.288974 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54676f5f-6126-4f8d-9e14-41408612c0bf-secret-volume\") pod \"collect-profiles-29398530-sjz7j\" (UID: \"54676f5f-6126-4f8d-9e14-41408612c0bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398530-sjz7j"
Nov 23 15:30:00 crc kubenswrapper[4718]: I1123 15:30:00.289045 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqqwn\" (UniqueName: \"kubernetes.io/projected/54676f5f-6126-4f8d-9e14-41408612c0bf-kube-api-access-qqqwn\") pod \"collect-profiles-29398530-sjz7j\" (UID: \"54676f5f-6126-4f8d-9e14-41408612c0bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398530-sjz7j"
Nov 23 15:30:00 crc kubenswrapper[4718]: I1123 15:30:00.289172 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54676f5f-6126-4f8d-9e14-41408612c0bf-config-volume\") pod \"collect-profiles-29398530-sjz7j\" (UID: \"54676f5f-6126-4f8d-9e14-41408612c0bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398530-sjz7j"
Nov 23 15:30:00 crc kubenswrapper[4718]: I1123 15:30:00.390582 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54676f5f-6126-4f8d-9e14-41408612c0bf-secret-volume\") pod \"collect-profiles-29398530-sjz7j\" (UID: \"54676f5f-6126-4f8d-9e14-41408612c0bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398530-sjz7j"
Nov 23 15:30:00 crc kubenswrapper[4718]: I1123 15:30:00.390620 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqqwn\" (UniqueName: \"kubernetes.io/projected/54676f5f-6126-4f8d-9e14-41408612c0bf-kube-api-access-qqqwn\") pod \"collect-profiles-29398530-sjz7j\" (UID: \"54676f5f-6126-4f8d-9e14-41408612c0bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398530-sjz7j"
Nov 23 15:30:00 crc kubenswrapper[4718]: I1123 15:30:00.390672 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54676f5f-6126-4f8d-9e14-41408612c0bf-config-volume\") pod \"collect-profiles-29398530-sjz7j\" (UID: \"54676f5f-6126-4f8d-9e14-41408612c0bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398530-sjz7j"
Nov 23 15:30:00 crc kubenswrapper[4718]: I1123 15:30:00.391761 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54676f5f-6126-4f8d-9e14-41408612c0bf-config-volume\") pod \"collect-profiles-29398530-sjz7j\" (UID: \"54676f5f-6126-4f8d-9e14-41408612c0bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398530-sjz7j"
Nov 23 15:30:00 crc kubenswrapper[4718]: I1123 15:30:00.397703 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54676f5f-6126-4f8d-9e14-41408612c0bf-secret-volume\") pod \"collect-profiles-29398530-sjz7j\" (UID: \"54676f5f-6126-4f8d-9e14-41408612c0bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398530-sjz7j"
Nov 23 15:30:00 crc kubenswrapper[4718]: I1123 15:30:00.410986 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqqwn\" (UniqueName: \"kubernetes.io/projected/54676f5f-6126-4f8d-9e14-41408612c0bf-kube-api-access-qqqwn\") pod \"collect-profiles-29398530-sjz7j\" (UID: \"54676f5f-6126-4f8d-9e14-41408612c0bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398530-sjz7j"
Nov 23 15:30:00 crc kubenswrapper[4718]: I1123 15:30:00.520839 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398530-sjz7j"
Nov 23 15:30:01 crc kubenswrapper[4718]: W1123 15:30:01.005152 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54676f5f_6126_4f8d_9e14_41408612c0bf.slice/crio-157c6cb7c7db59d7ed3216cad1b9a53869e7651f0aded2f62aafc5431d32f284 WatchSource:0}: Error finding container 157c6cb7c7db59d7ed3216cad1b9a53869e7651f0aded2f62aafc5431d32f284: Status 404 returned error can't find the container with id 157c6cb7c7db59d7ed3216cad1b9a53869e7651f0aded2f62aafc5431d32f284
Nov 23 15:30:01 crc kubenswrapper[4718]: I1123 15:30:01.013077 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398530-sjz7j"]
Nov 23 15:30:01 crc kubenswrapper[4718]: I1123 15:30:01.934543 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398530-sjz7j" event={"ID":"54676f5f-6126-4f8d-9e14-41408612c0bf","Type":"ContainerStarted","Data":"2ac1a3c88780eac70cd7e137f0fca9459c1951d94235f5ef10f2c744f64ee84e"}
Nov 23 15:30:01 crc kubenswrapper[4718]: I1123 15:30:01.934881 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398530-sjz7j" event={"ID":"54676f5f-6126-4f8d-9e14-41408612c0bf","Type":"ContainerStarted","Data":"157c6cb7c7db59d7ed3216cad1b9a53869e7651f0aded2f62aafc5431d32f284"}
Nov 23 15:30:07 crc kubenswrapper[4718]: I1123 15:30:07.016249 4718 generic.go:334] "Generic (PLEG): container finished" podID="54676f5f-6126-4f8d-9e14-41408612c0bf" containerID="2ac1a3c88780eac70cd7e137f0fca9459c1951d94235f5ef10f2c744f64ee84e" exitCode=0
Nov 23 15:30:07 crc kubenswrapper[4718]: I1123 15:30:07.016320 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398530-sjz7j" event={"ID":"54676f5f-6126-4f8d-9e14-41408612c0bf","Type":"ContainerDied","Data":"2ac1a3c88780eac70cd7e137f0fca9459c1951d94235f5ef10f2c744f64ee84e"}
Nov 23 15:30:08 crc kubenswrapper[4718]: I1123 15:30:08.348009 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398530-sjz7j"
Nov 23 15:30:08 crc kubenswrapper[4718]: I1123 15:30:08.444967 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54676f5f-6126-4f8d-9e14-41408612c0bf-config-volume\") pod \"54676f5f-6126-4f8d-9e14-41408612c0bf\" (UID: \"54676f5f-6126-4f8d-9e14-41408612c0bf\") "
Nov 23 15:30:08 crc kubenswrapper[4718]: I1123 15:30:08.445104 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqqwn\" (UniqueName: \"kubernetes.io/projected/54676f5f-6126-4f8d-9e14-41408612c0bf-kube-api-access-qqqwn\") pod \"54676f5f-6126-4f8d-9e14-41408612c0bf\" (UID: \"54676f5f-6126-4f8d-9e14-41408612c0bf\") "
Nov 23 15:30:08 crc kubenswrapper[4718]: I1123 15:30:08.445186 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54676f5f-6126-4f8d-9e14-41408612c0bf-secret-volume\") pod \"54676f5f-6126-4f8d-9e14-41408612c0bf\" (UID: \"54676f5f-6126-4f8d-9e14-41408612c0bf\") "
Nov 23 15:30:08 crc kubenswrapper[4718]: I1123 15:30:08.445816 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54676f5f-6126-4f8d-9e14-41408612c0bf-config-volume" (OuterVolumeSpecName: "config-volume") pod "54676f5f-6126-4f8d-9e14-41408612c0bf" (UID: "54676f5f-6126-4f8d-9e14-41408612c0bf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 15:30:08 crc kubenswrapper[4718]: I1123 15:30:08.450662 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54676f5f-6126-4f8d-9e14-41408612c0bf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "54676f5f-6126-4f8d-9e14-41408612c0bf" (UID: "54676f5f-6126-4f8d-9e14-41408612c0bf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 15:30:08 crc kubenswrapper[4718]: I1123 15:30:08.450669 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54676f5f-6126-4f8d-9e14-41408612c0bf-kube-api-access-qqqwn" (OuterVolumeSpecName: "kube-api-access-qqqwn") pod "54676f5f-6126-4f8d-9e14-41408612c0bf" (UID: "54676f5f-6126-4f8d-9e14-41408612c0bf"). InnerVolumeSpecName "kube-api-access-qqqwn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:30:08 crc kubenswrapper[4718]: I1123 15:30:08.546898 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54676f5f-6126-4f8d-9e14-41408612c0bf-config-volume\") on node \"crc\" DevicePath \"\""
Nov 23 15:30:08 crc kubenswrapper[4718]: I1123 15:30:08.546925 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqqwn\" (UniqueName: \"kubernetes.io/projected/54676f5f-6126-4f8d-9e14-41408612c0bf-kube-api-access-qqqwn\") on node \"crc\" DevicePath \"\""
Nov 23 15:30:08 crc kubenswrapper[4718]: I1123 15:30:08.546938 4718 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54676f5f-6126-4f8d-9e14-41408612c0bf-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 23 15:30:09 crc kubenswrapper[4718]: I1123 15:30:09.035480 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398530-sjz7j" event={"ID":"54676f5f-6126-4f8d-9e14-41408612c0bf","Type":"ContainerDied","Data":"157c6cb7c7db59d7ed3216cad1b9a53869e7651f0aded2f62aafc5431d32f284"}
Nov 23 15:30:09 crc kubenswrapper[4718]: I1123 15:30:09.035534 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="157c6cb7c7db59d7ed3216cad1b9a53869e7651f0aded2f62aafc5431d32f284"
Nov 23 15:30:09 crc kubenswrapper[4718]: I1123 15:30:09.035557 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398530-sjz7j"
Nov 23 15:30:09 crc kubenswrapper[4718]: I1123 15:30:09.421769 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398485-2272c"]
Nov 23 15:30:09 crc kubenswrapper[4718]: I1123 15:30:09.431518 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398485-2272c"]
Nov 23 15:30:10 crc kubenswrapper[4718]: I1123 15:30:10.454562 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03f55fc0-e04d-4f3a-8869-80cbb53c26ee" path="/var/lib/kubelet/pods/03f55fc0-e04d-4f3a-8869-80cbb53c26ee/volumes"
Nov 23 15:30:23 crc kubenswrapper[4718]: I1123 15:30:23.052828 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 15:30:23 crc kubenswrapper[4718]: I1123 15:30:23.053390 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 15:30:45 crc kubenswrapper[4718]: I1123 15:30:45.273496 4718 scope.go:117] "RemoveContainer" containerID="91b22b238a43688558fa254177d0b46f77b76822e841892653cadc58961a9f8f"
Nov 23 15:30:45 crc kubenswrapper[4718]: I1123 15:30:45.412807 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k6x2x"]
Nov 23 15:30:45 crc kubenswrapper[4718]: E1123 15:30:45.413203 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54676f5f-6126-4f8d-9e14-41408612c0bf" containerName="collect-profiles"
Nov 23 15:30:45 crc kubenswrapper[4718]: I1123 15:30:45.413216 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="54676f5f-6126-4f8d-9e14-41408612c0bf" containerName="collect-profiles"
Nov 23 15:30:45 crc kubenswrapper[4718]: I1123 15:30:45.413426 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="54676f5f-6126-4f8d-9e14-41408612c0bf" containerName="collect-profiles"
Nov 23 15:30:45 crc kubenswrapper[4718]: I1123 15:30:45.415097 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k6x2x"
Nov 23 15:30:45 crc kubenswrapper[4718]: I1123 15:30:45.426227 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k6x2x"]
Nov 23 15:30:45 crc kubenswrapper[4718]: I1123 15:30:45.486831 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c34138-95dc-42c3-bd2b-1b7bd19b15fb-catalog-content\") pod \"certified-operators-k6x2x\" (UID: \"e9c34138-95dc-42c3-bd2b-1b7bd19b15fb\") " pod="openshift-marketplace/certified-operators-k6x2x"
Nov 23 15:30:45 crc kubenswrapper[4718]: I1123 15:30:45.486986 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6q7q\" (UniqueName: \"kubernetes.io/projected/e9c34138-95dc-42c3-bd2b-1b7bd19b15fb-kube-api-access-c6q7q\") pod \"certified-operators-k6x2x\" (UID: \"e9c34138-95dc-42c3-bd2b-1b7bd19b15fb\") " pod="openshift-marketplace/certified-operators-k6x2x"
Nov 23 15:30:45 crc kubenswrapper[4718]: I1123 15:30:45.487048 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9c34138-95dc-42c3-bd2b-1b7bd19b15fb-utilities\") pod \"certified-operators-k6x2x\" (UID: \"e9c34138-95dc-42c3-bd2b-1b7bd19b15fb\") " pod="openshift-marketplace/certified-operators-k6x2x"
Nov 23 15:30:45 crc kubenswrapper[4718]: I1123 15:30:45.588795 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9c34138-95dc-42c3-bd2b-1b7bd19b15fb-utilities\") pod \"certified-operators-k6x2x\" (UID: \"e9c34138-95dc-42c3-bd2b-1b7bd19b15fb\") " pod="openshift-marketplace/certified-operators-k6x2x"
Nov 23 15:30:45 crc kubenswrapper[4718]: I1123 15:30:45.589181 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c34138-95dc-42c3-bd2b-1b7bd19b15fb-catalog-content\") pod \"certified-operators-k6x2x\" (UID: \"e9c34138-95dc-42c3-bd2b-1b7bd19b15fb\") " pod="openshift-marketplace/certified-operators-k6x2x"
Nov 23 15:30:45 crc kubenswrapper[4718]: I1123 15:30:45.589301 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6q7q\" (UniqueName: \"kubernetes.io/projected/e9c34138-95dc-42c3-bd2b-1b7bd19b15fb-kube-api-access-c6q7q\") pod \"certified-operators-k6x2x\" (UID: \"e9c34138-95dc-42c3-bd2b-1b7bd19b15fb\") " pod="openshift-marketplace/certified-operators-k6x2x"
Nov 23 15:30:45 crc kubenswrapper[4718]: I1123 15:30:45.589319 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9c34138-95dc-42c3-bd2b-1b7bd19b15fb-utilities\") pod \"certified-operators-k6x2x\" (UID: \"e9c34138-95dc-42c3-bd2b-1b7bd19b15fb\") " pod="openshift-marketplace/certified-operators-k6x2x"
Nov 23 15:30:45 crc kubenswrapper[4718]: I1123 15:30:45.589618 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c34138-95dc-42c3-bd2b-1b7bd19b15fb-catalog-content\") pod \"certified-operators-k6x2x\" (UID: \"e9c34138-95dc-42c3-bd2b-1b7bd19b15fb\") " pod="openshift-marketplace/certified-operators-k6x2x"
Nov 23 15:30:45 crc kubenswrapper[4718]: I1123 15:30:45.609826 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6q7q\" (UniqueName: \"kubernetes.io/projected/e9c34138-95dc-42c3-bd2b-1b7bd19b15fb-kube-api-access-c6q7q\") pod \"certified-operators-k6x2x\" (UID: \"e9c34138-95dc-42c3-bd2b-1b7bd19b15fb\") " pod="openshift-marketplace/certified-operators-k6x2x"
Nov 23 15:30:45 crc kubenswrapper[4718]: I1123 15:30:45.747889 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k6x2x"
Nov 23 15:30:46 crc kubenswrapper[4718]: I1123 15:30:46.292928 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k6x2x"]
Nov 23 15:30:46 crc kubenswrapper[4718]: I1123 15:30:46.376697 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6x2x" event={"ID":"e9c34138-95dc-42c3-bd2b-1b7bd19b15fb","Type":"ContainerStarted","Data":"1a8b7864e1e4016253fccbef538f654268a5fe57cdccbcdbfccee83849150c4a"}
Nov 23 15:30:47 crc kubenswrapper[4718]: I1123 15:30:47.391008 4718 generic.go:334] "Generic (PLEG): container finished" podID="e9c34138-95dc-42c3-bd2b-1b7bd19b15fb" containerID="e5d56a88d0669fd1fbe752cfd5a3f78c501fb01d8ef2e62c3ab62b6677898212" exitCode=0
Nov 23 15:30:47 crc kubenswrapper[4718]: I1123 15:30:47.391202 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6x2x" event={"ID":"e9c34138-95dc-42c3-bd2b-1b7bd19b15fb","Type":"ContainerDied","Data":"e5d56a88d0669fd1fbe752cfd5a3f78c501fb01d8ef2e62c3ab62b6677898212"}
Nov 23 15:30:48 crc kubenswrapper[4718]: I1123 15:30:48.403545 4718 generic.go:334] "Generic (PLEG): container finished" podID="e9c34138-95dc-42c3-bd2b-1b7bd19b15fb" containerID="30e5bd1ce40f76f77f9c39c9d77dcab8b76112bb29769802894fbe21e7d362e3" exitCode=0
Nov 23 15:30:48 crc kubenswrapper[4718]: I1123 15:30:48.403601 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6x2x" event={"ID":"e9c34138-95dc-42c3-bd2b-1b7bd19b15fb","Type":"ContainerDied","Data":"30e5bd1ce40f76f77f9c39c9d77dcab8b76112bb29769802894fbe21e7d362e3"}
Nov 23 15:30:49 crc kubenswrapper[4718]: I1123 15:30:49.440151 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6x2x" event={"ID":"e9c34138-95dc-42c3-bd2b-1b7bd19b15fb","Type":"ContainerStarted","Data":"3dc9c6c7b7c6dba8fc0e6bbc9e481c9fdf9cb6c2e4931e804b32959070f6baf2"}
Nov 23 15:30:49 crc kubenswrapper[4718]: I1123 15:30:49.463238 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k6x2x" podStartSLOduration=3.048143038 podStartE2EDuration="4.463221806s" podCreationTimestamp="2025-11-23 15:30:45 +0000 UTC" firstStartedPulling="2025-11-23 15:30:47.394060743 +0000 UTC m=+2698.633680587" lastFinishedPulling="2025-11-23 15:30:48.809139511 +0000 UTC m=+2700.048759355" observedRunningTime="2025-11-23 15:30:49.460852854 +0000 UTC m=+2700.700472718" watchObservedRunningTime="2025-11-23 15:30:49.463221806 +0000 UTC m=+2700.702841640"
Nov 23 15:30:53 crc kubenswrapper[4718]: I1123 15:30:53.053624 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 15:30:53 crc kubenswrapper[4718]: I1123 15:30:53.054124 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 15:30:53 crc kubenswrapper[4718]: I1123 15:30:53.054177 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw"
Nov 23 15:30:53 crc kubenswrapper[4718]: I1123 15:30:53.055023 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f1a6f0050160fd8439a472e79a076aad465bfef06eedf01d8fded7481caa3ce4"} pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 23 15:30:53 crc kubenswrapper[4718]: I1123 15:30:53.055086 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" containerID="cri-o://f1a6f0050160fd8439a472e79a076aad465bfef06eedf01d8fded7481caa3ce4" gracePeriod=600
Nov 23 15:30:53 crc kubenswrapper[4718]: I1123 15:30:53.486765 4718 generic.go:334] "Generic (PLEG): container finished" podID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerID="f1a6f0050160fd8439a472e79a076aad465bfef06eedf01d8fded7481caa3ce4" exitCode=0
Nov 23 15:30:53 crc kubenswrapper[4718]: I1123 15:30:53.486842 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerDied","Data":"f1a6f0050160fd8439a472e79a076aad465bfef06eedf01d8fded7481caa3ce4"}
Nov 23 15:30:53 crc kubenswrapper[4718]: I1123 15:30:53.487291 4718 scope.go:117] "RemoveContainer" containerID="bc141d953b76621ee52e79c11fff9f0cbe85d97fbd9bd27a8eaa586237240cf1"
Nov 23 15:30:54 crc kubenswrapper[4718]: I1123 15:30:54.498658 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerStarted","Data":"b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8"}
Nov 23 15:30:55 crc kubenswrapper[4718]: I1123 15:30:55.748620 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k6x2x"
Nov 23 15:30:55 crc kubenswrapper[4718]: I1123 15:30:55.748884 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k6x2x"
Nov 23 15:30:55 crc kubenswrapper[4718]: I1123 15:30:55.812788 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k6x2x"
Nov 23 15:30:56 crc kubenswrapper[4718]: I1123 15:30:56.571463 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k6x2x"
Nov 23 15:30:56 crc kubenswrapper[4718]: I1123 15:30:56.620991 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k6x2x"]
Nov 23 15:30:58 crc kubenswrapper[4718]: I1123 15:30:58.538312 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k6x2x" podUID="e9c34138-95dc-42c3-bd2b-1b7bd19b15fb" containerName="registry-server" containerID="cri-o://3dc9c6c7b7c6dba8fc0e6bbc9e481c9fdf9cb6c2e4931e804b32959070f6baf2" gracePeriod=2
Nov 23 15:30:58 crc kubenswrapper[4718]: I1123 15:30:58.884137 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-98mws"]
Nov 23 15:30:58 crc kubenswrapper[4718]: I1123 15:30:58.886815 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98mws"
Nov 23 15:30:58 crc kubenswrapper[4718]: I1123 15:30:58.905198 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-98mws"]
Nov 23 15:30:58 crc kubenswrapper[4718]: I1123 15:30:58.940944 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/120cc5bc-7f95-4a30-82c5-e0332df32e74-catalog-content\") pod \"redhat-marketplace-98mws\" (UID: \"120cc5bc-7f95-4a30-82c5-e0332df32e74\") " pod="openshift-marketplace/redhat-marketplace-98mws"
Nov 23 15:30:58 crc kubenswrapper[4718]: I1123 15:30:58.941008 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/120cc5bc-7f95-4a30-82c5-e0332df32e74-utilities\") pod \"redhat-marketplace-98mws\" (UID: \"120cc5bc-7f95-4a30-82c5-e0332df32e74\") " pod="openshift-marketplace/redhat-marketplace-98mws"
Nov 23 15:30:58 crc kubenswrapper[4718]: I1123 15:30:58.941030 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q922\" (UniqueName: \"kubernetes.io/projected/120cc5bc-7f95-4a30-82c5-e0332df32e74-kube-api-access-5q922\") pod \"redhat-marketplace-98mws\" (UID: \"120cc5bc-7f95-4a30-82c5-e0332df32e74\") " pod="openshift-marketplace/redhat-marketplace-98mws"
Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.042910 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/120cc5bc-7f95-4a30-82c5-e0332df32e74-catalog-content\") pod \"redhat-marketplace-98mws\" (UID: \"120cc5bc-7f95-4a30-82c5-e0332df32e74\") " pod="openshift-marketplace/redhat-marketplace-98mws"
Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.042996 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/120cc5bc-7f95-4a30-82c5-e0332df32e74-utilities\") pod \"redhat-marketplace-98mws\" (UID: \"120cc5bc-7f95-4a30-82c5-e0332df32e74\") " pod="openshift-marketplace/redhat-marketplace-98mws"
Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.043019 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q922\" (UniqueName: \"kubernetes.io/projected/120cc5bc-7f95-4a30-82c5-e0332df32e74-kube-api-access-5q922\") pod \"redhat-marketplace-98mws\" (UID: \"120cc5bc-7f95-4a30-82c5-e0332df32e74\") " pod="openshift-marketplace/redhat-marketplace-98mws"
Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.043592 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/120cc5bc-7f95-4a30-82c5-e0332df32e74-utilities\") pod \"redhat-marketplace-98mws\" (UID: \"120cc5bc-7f95-4a30-82c5-e0332df32e74\") " pod="openshift-marketplace/redhat-marketplace-98mws"
Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.043932 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/120cc5bc-7f95-4a30-82c5-e0332df32e74-catalog-content\") pod \"redhat-marketplace-98mws\" (UID: \"120cc5bc-7f95-4a30-82c5-e0332df32e74\") " pod="openshift-marketplace/redhat-marketplace-98mws"
Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.070826 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k6x2x"
Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.071844 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q922\" (UniqueName: \"kubernetes.io/projected/120cc5bc-7f95-4a30-82c5-e0332df32e74-kube-api-access-5q922\") pod \"redhat-marketplace-98mws\" (UID: \"120cc5bc-7f95-4a30-82c5-e0332df32e74\") " pod="openshift-marketplace/redhat-marketplace-98mws"
Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.144492 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c34138-95dc-42c3-bd2b-1b7bd19b15fb-catalog-content\") pod \"e9c34138-95dc-42c3-bd2b-1b7bd19b15fb\" (UID: \"e9c34138-95dc-42c3-bd2b-1b7bd19b15fb\") "
Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.144588 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6q7q\" (UniqueName: \"kubernetes.io/projected/e9c34138-95dc-42c3-bd2b-1b7bd19b15fb-kube-api-access-c6q7q\") pod \"e9c34138-95dc-42c3-bd2b-1b7bd19b15fb\" (UID: \"e9c34138-95dc-42c3-bd2b-1b7bd19b15fb\") "
Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.144668 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9c34138-95dc-42c3-bd2b-1b7bd19b15fb-utilities\") pod \"e9c34138-95dc-42c3-bd2b-1b7bd19b15fb\" (UID: \"e9c34138-95dc-42c3-bd2b-1b7bd19b15fb\") "
Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.146039 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9c34138-95dc-42c3-bd2b-1b7bd19b15fb-utilities" (OuterVolumeSpecName: "utilities") pod "e9c34138-95dc-42c3-bd2b-1b7bd19b15fb" (UID: "e9c34138-95dc-42c3-bd2b-1b7bd19b15fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.152134 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9c34138-95dc-42c3-bd2b-1b7bd19b15fb-kube-api-access-c6q7q" (OuterVolumeSpecName: "kube-api-access-c6q7q") pod "e9c34138-95dc-42c3-bd2b-1b7bd19b15fb" (UID: "e9c34138-95dc-42c3-bd2b-1b7bd19b15fb"). InnerVolumeSpecName "kube-api-access-c6q7q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.223286 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98mws"
Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.246820 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6q7q\" (UniqueName: \"kubernetes.io/projected/e9c34138-95dc-42c3-bd2b-1b7bd19b15fb-kube-api-access-c6q7q\") on node \"crc\" DevicePath \"\""
Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.246849 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9c34138-95dc-42c3-bd2b-1b7bd19b15fb-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.547828 4718 generic.go:334] "Generic (PLEG): container finished" podID="e9c34138-95dc-42c3-bd2b-1b7bd19b15fb" containerID="3dc9c6c7b7c6dba8fc0e6bbc9e481c9fdf9cb6c2e4931e804b32959070f6baf2" exitCode=0
Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.547886 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k6x2x"
Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.547905 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6x2x" event={"ID":"e9c34138-95dc-42c3-bd2b-1b7bd19b15fb","Type":"ContainerDied","Data":"3dc9c6c7b7c6dba8fc0e6bbc9e481c9fdf9cb6c2e4931e804b32959070f6baf2"}
Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.548291 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6x2x" event={"ID":"e9c34138-95dc-42c3-bd2b-1b7bd19b15fb","Type":"ContainerDied","Data":"1a8b7864e1e4016253fccbef538f654268a5fe57cdccbcdbfccee83849150c4a"}
Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.548317 4718 scope.go:117] "RemoveContainer" containerID="3dc9c6c7b7c6dba8fc0e6bbc9e481c9fdf9cb6c2e4931e804b32959070f6baf2"
Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.568242 4718 scope.go:117] "RemoveContainer" containerID="30e5bd1ce40f76f77f9c39c9d77dcab8b76112bb29769802894fbe21e7d362e3"
Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.586633 4718 scope.go:117] "RemoveContainer" containerID="e5d56a88d0669fd1fbe752cfd5a3f78c501fb01d8ef2e62c3ab62b6677898212"
Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.605689 4718 scope.go:117] "RemoveContainer" containerID="3dc9c6c7b7c6dba8fc0e6bbc9e481c9fdf9cb6c2e4931e804b32959070f6baf2"
Nov 23 15:30:59 crc kubenswrapper[4718]: E1123 15:30:59.606113 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dc9c6c7b7c6dba8fc0e6bbc9e481c9fdf9cb6c2e4931e804b32959070f6baf2\": container with ID starting with 3dc9c6c7b7c6dba8fc0e6bbc9e481c9fdf9cb6c2e4931e804b32959070f6baf2 not found: ID does not exist" containerID="3dc9c6c7b7c6dba8fc0e6bbc9e481c9fdf9cb6c2e4931e804b32959070f6baf2"
Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.606228 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dc9c6c7b7c6dba8fc0e6bbc9e481c9fdf9cb6c2e4931e804b32959070f6baf2"} err="failed to get container status \"3dc9c6c7b7c6dba8fc0e6bbc9e481c9fdf9cb6c2e4931e804b32959070f6baf2\": rpc error: code = NotFound desc = could not find container \"3dc9c6c7b7c6dba8fc0e6bbc9e481c9fdf9cb6c2e4931e804b32959070f6baf2\": container with ID
starting with 3dc9c6c7b7c6dba8fc0e6bbc9e481c9fdf9cb6c2e4931e804b32959070f6baf2 not found: ID does not exist" Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.606312 4718 scope.go:117] "RemoveContainer" containerID="30e5bd1ce40f76f77f9c39c9d77dcab8b76112bb29769802894fbe21e7d362e3" Nov 23 15:30:59 crc kubenswrapper[4718]: E1123 15:30:59.606691 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30e5bd1ce40f76f77f9c39c9d77dcab8b76112bb29769802894fbe21e7d362e3\": container with ID starting with 30e5bd1ce40f76f77f9c39c9d77dcab8b76112bb29769802894fbe21e7d362e3 not found: ID does not exist" containerID="30e5bd1ce40f76f77f9c39c9d77dcab8b76112bb29769802894fbe21e7d362e3" Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.606721 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30e5bd1ce40f76f77f9c39c9d77dcab8b76112bb29769802894fbe21e7d362e3"} err="failed to get container status \"30e5bd1ce40f76f77f9c39c9d77dcab8b76112bb29769802894fbe21e7d362e3\": rpc error: code = NotFound desc = could not find container \"30e5bd1ce40f76f77f9c39c9d77dcab8b76112bb29769802894fbe21e7d362e3\": container with ID starting with 30e5bd1ce40f76f77f9c39c9d77dcab8b76112bb29769802894fbe21e7d362e3 not found: ID does not exist" Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.606740 4718 scope.go:117] "RemoveContainer" containerID="e5d56a88d0669fd1fbe752cfd5a3f78c501fb01d8ef2e62c3ab62b6677898212" Nov 23 15:30:59 crc kubenswrapper[4718]: E1123 15:30:59.607010 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5d56a88d0669fd1fbe752cfd5a3f78c501fb01d8ef2e62c3ab62b6677898212\": container with ID starting with e5d56a88d0669fd1fbe752cfd5a3f78c501fb01d8ef2e62c3ab62b6677898212 not found: ID does not exist" containerID="e5d56a88d0669fd1fbe752cfd5a3f78c501fb01d8ef2e62c3ab62b6677898212" Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.607059 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5d56a88d0669fd1fbe752cfd5a3f78c501fb01d8ef2e62c3ab62b6677898212"} err="failed to get container status \"e5d56a88d0669fd1fbe752cfd5a3f78c501fb01d8ef2e62c3ab62b6677898212\": rpc error: code = NotFound desc = could not find container \"e5d56a88d0669fd1fbe752cfd5a3f78c501fb01d8ef2e62c3ab62b6677898212\": container with ID starting with e5d56a88d0669fd1fbe752cfd5a3f78c501fb01d8ef2e62c3ab62b6677898212 not found: ID does not exist" Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.682585 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-98mws"] Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.857139 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9c34138-95dc-42c3-bd2b-1b7bd19b15fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9c34138-95dc-42c3-bd2b-1b7bd19b15fb" (UID: "e9c34138-95dc-42c3-bd2b-1b7bd19b15fb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:30:59 crc kubenswrapper[4718]: I1123 15:30:59.958647 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c34138-95dc-42c3-bd2b-1b7bd19b15fb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 15:31:00 crc kubenswrapper[4718]: I1123 15:31:00.208031 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k6x2x"] Nov 23 15:31:00 crc kubenswrapper[4718]: I1123 15:31:00.216237 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k6x2x"] Nov 23 15:31:00 crc kubenswrapper[4718]: I1123 15:31:00.458379 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9c34138-95dc-42c3-bd2b-1b7bd19b15fb" path="/var/lib/kubelet/pods/e9c34138-95dc-42c3-bd2b-1b7bd19b15fb/volumes" Nov 23 15:31:00 crc kubenswrapper[4718]: I1123 15:31:00.563642 4718 generic.go:334] "Generic (PLEG): container finished" podID="120cc5bc-7f95-4a30-82c5-e0332df32e74" containerID="f50e7385aa1f09c132221a56cc11e67ac7505ab9877e524cfb17e1a6db1ac200" exitCode=0 Nov 23 15:31:00 crc kubenswrapper[4718]: I1123 15:31:00.563708 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98mws" event={"ID":"120cc5bc-7f95-4a30-82c5-e0332df32e74","Type":"ContainerDied","Data":"f50e7385aa1f09c132221a56cc11e67ac7505ab9877e524cfb17e1a6db1ac200"} Nov 23 15:31:00 crc kubenswrapper[4718]: I1123 15:31:00.563748 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98mws" event={"ID":"120cc5bc-7f95-4a30-82c5-e0332df32e74","Type":"ContainerStarted","Data":"3de2afbc9b511a2ffef94118f0c92bc7c8557d438ee088759c6e60ad29a9fb0b"} Nov 23 15:31:02 crc kubenswrapper[4718]: I1123 15:31:02.597175 4718 generic.go:334] "Generic (PLEG): container finished" podID="120cc5bc-7f95-4a30-82c5-e0332df32e74" containerID="c794f8efe70ea8a6779af5373889358c4ed93b705fbeabb5fa3a0ee787daa94f" exitCode=0 Nov 23 15:31:02 crc kubenswrapper[4718]: I1123 15:31:02.597290 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98mws" event={"ID":"120cc5bc-7f95-4a30-82c5-e0332df32e74","Type":"ContainerDied","Data":"c794f8efe70ea8a6779af5373889358c4ed93b705fbeabb5fa3a0ee787daa94f"} Nov 23 15:31:03 crc kubenswrapper[4718]: I1123 15:31:03.611824 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98mws" event={"ID":"120cc5bc-7f95-4a30-82c5-e0332df32e74","Type":"ContainerStarted","Data":"5e1a0a2a41fbcaa9985919a29ac5a115a827330c2bf276aec42a876fb9b7a1ee"} Nov 23 15:31:03 crc kubenswrapper[4718]: I1123 15:31:03.636702 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-98mws" podStartSLOduration=3.128108714 podStartE2EDuration="5.636675116s" podCreationTimestamp="2025-11-23 15:30:58 +0000 UTC" firstStartedPulling="2025-11-23 15:31:00.566781456 +0000 UTC m=+2711.806401330" lastFinishedPulling="2025-11-23 15:31:03.075347888 +0000 UTC m=+2714.314967732" observedRunningTime="2025-11-23 15:31:03.63046952 +0000 UTC m=+2714.870089384" watchObservedRunningTime="2025-11-23 15:31:03.636675116 +0000 UTC m=+2714.876294980" Nov 23 15:31:09 crc kubenswrapper[4718]: I1123 15:31:09.223470 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-98mws" Nov 23 
15:31:09 crc kubenswrapper[4718]: I1123 15:31:09.224026 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-98mws" Nov 23 15:31:09 crc kubenswrapper[4718]: I1123 15:31:09.270512 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-98mws" Nov 23 15:31:09 crc kubenswrapper[4718]: I1123 15:31:09.722819 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-98mws" Nov 23 15:31:10 crc kubenswrapper[4718]: I1123 15:31:10.850407 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-98mws"] Nov 23 15:31:11 crc kubenswrapper[4718]: I1123 15:31:11.059685 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7drk8"] Nov 23 15:31:11 crc kubenswrapper[4718]: E1123 15:31:11.060163 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c34138-95dc-42c3-bd2b-1b7bd19b15fb" containerName="extract-utilities" Nov 23 15:31:11 crc kubenswrapper[4718]: I1123 15:31:11.060198 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c34138-95dc-42c3-bd2b-1b7bd19b15fb" containerName="extract-utilities" Nov 23 15:31:11 crc kubenswrapper[4718]: E1123 15:31:11.060257 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c34138-95dc-42c3-bd2b-1b7bd19b15fb" containerName="extract-content" Nov 23 15:31:11 crc kubenswrapper[4718]: I1123 15:31:11.060269 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c34138-95dc-42c3-bd2b-1b7bd19b15fb" containerName="extract-content" Nov 23 15:31:11 crc kubenswrapper[4718]: E1123 15:31:11.060284 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c34138-95dc-42c3-bd2b-1b7bd19b15fb" containerName="registry-server" Nov 23 15:31:11 crc kubenswrapper[4718]: I1123 15:31:11.060293 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c34138-95dc-42c3-bd2b-1b7bd19b15fb" containerName="registry-server" Nov 23 15:31:11 crc kubenswrapper[4718]: I1123 15:31:11.061801 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9c34138-95dc-42c3-bd2b-1b7bd19b15fb" containerName="registry-server" Nov 23 15:31:11 crc kubenswrapper[4718]: I1123 15:31:11.063321 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7drk8" Nov 23 15:31:11 crc kubenswrapper[4718]: I1123 15:31:11.070011 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7drk8"] Nov 23 15:31:11 crc kubenswrapper[4718]: I1123 15:31:11.258543 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc1ca8fc-ccb3-4eba-b542-8feeb35a383a-catalog-content\") pod \"community-operators-7drk8\" (UID: \"bc1ca8fc-ccb3-4eba-b542-8feeb35a383a\") " pod="openshift-marketplace/community-operators-7drk8" Nov 23 15:31:11 crc kubenswrapper[4718]: I1123 15:31:11.258738 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc1ca8fc-ccb3-4eba-b542-8feeb35a383a-utilities\") pod \"community-operators-7drk8\" (UID: \"bc1ca8fc-ccb3-4eba-b542-8feeb35a383a\") " pod="openshift-marketplace/community-operators-7drk8" Nov 23 15:31:11 crc kubenswrapper[4718]: I1123 15:31:11.258842 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-249zn\" (UniqueName: \"kubernetes.io/projected/bc1ca8fc-ccb3-4eba-b542-8feeb35a383a-kube-api-access-249zn\") pod \"community-operators-7drk8\" (UID: \"bc1ca8fc-ccb3-4eba-b542-8feeb35a383a\") " pod="openshift-marketplace/community-operators-7drk8" Nov 23 15:31:11 crc kubenswrapper[4718]: I1123 15:31:11.360147 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc1ca8fc-ccb3-4eba-b542-8feeb35a383a-catalog-content\") pod \"community-operators-7drk8\" (UID: \"bc1ca8fc-ccb3-4eba-b542-8feeb35a383a\") " pod="openshift-marketplace/community-operators-7drk8" Nov 23 15:31:11 crc kubenswrapper[4718]: I1123 15:31:11.360280 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc1ca8fc-ccb3-4eba-b542-8feeb35a383a-utilities\") pod \"community-operators-7drk8\" (UID: \"bc1ca8fc-ccb3-4eba-b542-8feeb35a383a\") " pod="openshift-marketplace/community-operators-7drk8" Nov 23 15:31:11 crc kubenswrapper[4718]: I1123 15:31:11.360371 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-249zn\" (UniqueName: \"kubernetes.io/projected/bc1ca8fc-ccb3-4eba-b542-8feeb35a383a-kube-api-access-249zn\") pod \"community-operators-7drk8\" (UID: \"bc1ca8fc-ccb3-4eba-b542-8feeb35a383a\") " pod="openshift-marketplace/community-operators-7drk8" Nov 23 15:31:11 crc kubenswrapper[4718]: I1123 15:31:11.361255 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc1ca8fc-ccb3-4eba-b542-8feeb35a383a-catalog-content\") pod \"community-operators-7drk8\" (UID: \"bc1ca8fc-ccb3-4eba-b542-8feeb35a383a\") " pod="openshift-marketplace/community-operators-7drk8" Nov 23 15:31:11 crc kubenswrapper[4718]: I1123 15:31:11.361580 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc1ca8fc-ccb3-4eba-b542-8feeb35a383a-utilities\") pod \"community-operators-7drk8\" (UID: \"bc1ca8fc-ccb3-4eba-b542-8feeb35a383a\") " pod="openshift-marketplace/community-operators-7drk8" Nov 23 15:31:11 crc kubenswrapper[4718]: I1123 15:31:11.387125 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-249zn\" (UniqueName: \"kubernetes.io/projected/bc1ca8fc-ccb3-4eba-b542-8feeb35a383a-kube-api-access-249zn\") pod \"community-operators-7drk8\" (UID: \"bc1ca8fc-ccb3-4eba-b542-8feeb35a383a\") " pod="openshift-marketplace/community-operators-7drk8" Nov 23 15:31:11 crc kubenswrapper[4718]: I1123 15:31:11.398921 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7drk8" Nov 23 15:31:11 crc kubenswrapper[4718]: I1123 15:31:11.691872 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-98mws" podUID="120cc5bc-7f95-4a30-82c5-e0332df32e74" containerName="registry-server" containerID="cri-o://5e1a0a2a41fbcaa9985919a29ac5a115a827330c2bf276aec42a876fb9b7a1ee" gracePeriod=2 Nov 23 15:31:11 crc kubenswrapper[4718]: I1123 15:31:11.877316 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7drk8"] Nov 23 15:31:11 crc kubenswrapper[4718]: W1123 15:31:11.897290 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc1ca8fc_ccb3_4eba_b542_8feeb35a383a.slice/crio-a94ebe31ddb155c726902065d71afd3d31787fa963a319a3d0673eb6cfe5377b WatchSource:0}: Error finding container a94ebe31ddb155c726902065d71afd3d31787fa963a319a3d0673eb6cfe5377b: Status 404 returned error can't find the container with id a94ebe31ddb155c726902065d71afd3d31787fa963a319a3d0673eb6cfe5377b Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.149551 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98mws" Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.302792 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/120cc5bc-7f95-4a30-82c5-e0332df32e74-catalog-content\") pod \"120cc5bc-7f95-4a30-82c5-e0332df32e74\" (UID: \"120cc5bc-7f95-4a30-82c5-e0332df32e74\") " Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.303054 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q922\" (UniqueName: \"kubernetes.io/projected/120cc5bc-7f95-4a30-82c5-e0332df32e74-kube-api-access-5q922\") pod \"120cc5bc-7f95-4a30-82c5-e0332df32e74\" (UID: \"120cc5bc-7f95-4a30-82c5-e0332df32e74\") " Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.303108 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/120cc5bc-7f95-4a30-82c5-e0332df32e74-utilities\") pod \"120cc5bc-7f95-4a30-82c5-e0332df32e74\" (UID: \"120cc5bc-7f95-4a30-82c5-e0332df32e74\") " Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.304502 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/120cc5bc-7f95-4a30-82c5-e0332df32e74-utilities" (OuterVolumeSpecName: "utilities") pod "120cc5bc-7f95-4a30-82c5-e0332df32e74" (UID: "120cc5bc-7f95-4a30-82c5-e0332df32e74"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.310418 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/120cc5bc-7f95-4a30-82c5-e0332df32e74-kube-api-access-5q922" (OuterVolumeSpecName: "kube-api-access-5q922") pod "120cc5bc-7f95-4a30-82c5-e0332df32e74" (UID: "120cc5bc-7f95-4a30-82c5-e0332df32e74"). InnerVolumeSpecName "kube-api-access-5q922". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.330773 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/120cc5bc-7f95-4a30-82c5-e0332df32e74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "120cc5bc-7f95-4a30-82c5-e0332df32e74" (UID: "120cc5bc-7f95-4a30-82c5-e0332df32e74"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.405678 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q922\" (UniqueName: \"kubernetes.io/projected/120cc5bc-7f95-4a30-82c5-e0332df32e74-kube-api-access-5q922\") on node \"crc\" DevicePath \"\"" Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.405724 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/120cc5bc-7f95-4a30-82c5-e0332df32e74-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.405740 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/120cc5bc-7f95-4a30-82c5-e0332df32e74-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.702488 4718 generic.go:334] "Generic (PLEG): container finished" podID="120cc5bc-7f95-4a30-82c5-e0332df32e74" containerID="5e1a0a2a41fbcaa9985919a29ac5a115a827330c2bf276aec42a876fb9b7a1ee" exitCode=0 Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.702558 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98mws" event={"ID":"120cc5bc-7f95-4a30-82c5-e0332df32e74","Type":"ContainerDied","Data":"5e1a0a2a41fbcaa9985919a29ac5a115a827330c2bf276aec42a876fb9b7a1ee"} Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.702590 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98mws" event={"ID":"120cc5bc-7f95-4a30-82c5-e0332df32e74","Type":"ContainerDied","Data":"3de2afbc9b511a2ffef94118f0c92bc7c8557d438ee088759c6e60ad29a9fb0b"} Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.702609 4718 scope.go:117] "RemoveContainer" containerID="5e1a0a2a41fbcaa9985919a29ac5a115a827330c2bf276aec42a876fb9b7a1ee" Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.702792 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98mws" Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.704967 4718 generic.go:334] "Generic (PLEG): container finished" podID="bc1ca8fc-ccb3-4eba-b542-8feeb35a383a" containerID="4ed76aa157dbbdf4ee157f13cfc5e970005da8778d4b806a327a7abd2d88b98c" exitCode=0 Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.706262 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7drk8" event={"ID":"bc1ca8fc-ccb3-4eba-b542-8feeb35a383a","Type":"ContainerDied","Data":"4ed76aa157dbbdf4ee157f13cfc5e970005da8778d4b806a327a7abd2d88b98c"} Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.706322 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7drk8" event={"ID":"bc1ca8fc-ccb3-4eba-b542-8feeb35a383a","Type":"ContainerStarted","Data":"a94ebe31ddb155c726902065d71afd3d31787fa963a319a3d0673eb6cfe5377b"} Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.734190 4718 scope.go:117] "RemoveContainer" containerID="c794f8efe70ea8a6779af5373889358c4ed93b705fbeabb5fa3a0ee787daa94f" Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.755656 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-98mws"] Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.762459 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-98mws"] Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.766087 4718 scope.go:117] "RemoveContainer" containerID="f50e7385aa1f09c132221a56cc11e67ac7505ab9877e524cfb17e1a6db1ac200" Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.811474 4718 scope.go:117] "RemoveContainer" containerID="5e1a0a2a41fbcaa9985919a29ac5a115a827330c2bf276aec42a876fb9b7a1ee" Nov 23 15:31:12 crc kubenswrapper[4718]: E1123 15:31:12.812462 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e1a0a2a41fbcaa9985919a29ac5a115a827330c2bf276aec42a876fb9b7a1ee\": container with ID starting with 5e1a0a2a41fbcaa9985919a29ac5a115a827330c2bf276aec42a876fb9b7a1ee not found: ID does not exist" containerID="5e1a0a2a41fbcaa9985919a29ac5a115a827330c2bf276aec42a876fb9b7a1ee" Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.812493 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e1a0a2a41fbcaa9985919a29ac5a115a827330c2bf276aec42a876fb9b7a1ee"} err="failed to get container status \"5e1a0a2a41fbcaa9985919a29ac5a115a827330c2bf276aec42a876fb9b7a1ee\": rpc error: code = NotFound desc = could not find container \"5e1a0a2a41fbcaa9985919a29ac5a115a827330c2bf276aec42a876fb9b7a1ee\": container with ID starting with 5e1a0a2a41fbcaa9985919a29ac5a115a827330c2bf276aec42a876fb9b7a1ee not found: ID does not exist" Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.812515 4718 scope.go:117] "RemoveContainer" containerID="c794f8efe70ea8a6779af5373889358c4ed93b705fbeabb5fa3a0ee787daa94f" Nov 23 15:31:12 crc kubenswrapper[4718]: E1123 15:31:12.813384 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c794f8efe70ea8a6779af5373889358c4ed93b705fbeabb5fa3a0ee787daa94f\": container with ID starting with c794f8efe70ea8a6779af5373889358c4ed93b705fbeabb5fa3a0ee787daa94f not found: ID does not exist" 
containerID="c794f8efe70ea8a6779af5373889358c4ed93b705fbeabb5fa3a0ee787daa94f" Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.813407 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c794f8efe70ea8a6779af5373889358c4ed93b705fbeabb5fa3a0ee787daa94f"} err="failed to get container status \"c794f8efe70ea8a6779af5373889358c4ed93b705fbeabb5fa3a0ee787daa94f\": rpc error: code = NotFound desc = could not find container \"c794f8efe70ea8a6779af5373889358c4ed93b705fbeabb5fa3a0ee787daa94f\": container with ID starting with c794f8efe70ea8a6779af5373889358c4ed93b705fbeabb5fa3a0ee787daa94f not found: ID does not exist" Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.813420 4718 scope.go:117] "RemoveContainer" containerID="f50e7385aa1f09c132221a56cc11e67ac7505ab9877e524cfb17e1a6db1ac200" Nov 23 15:31:12 crc kubenswrapper[4718]: E1123 15:31:12.814395 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f50e7385aa1f09c132221a56cc11e67ac7505ab9877e524cfb17e1a6db1ac200\": container with ID starting with f50e7385aa1f09c132221a56cc11e67ac7505ab9877e524cfb17e1a6db1ac200 not found: ID does not exist" containerID="f50e7385aa1f09c132221a56cc11e67ac7505ab9877e524cfb17e1a6db1ac200" Nov 23 15:31:12 crc kubenswrapper[4718]: I1123 15:31:12.814820 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f50e7385aa1f09c132221a56cc11e67ac7505ab9877e524cfb17e1a6db1ac200"} err="failed to get container status \"f50e7385aa1f09c132221a56cc11e67ac7505ab9877e524cfb17e1a6db1ac200\": rpc error: code = NotFound desc = could not find container \"f50e7385aa1f09c132221a56cc11e67ac7505ab9877e524cfb17e1a6db1ac200\": container with ID starting with f50e7385aa1f09c132221a56cc11e67ac7505ab9877e524cfb17e1a6db1ac200 not found: ID does not exist" Nov 23 15:31:13 crc kubenswrapper[4718]: I1123 15:31:13.717027 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7drk8" event={"ID":"bc1ca8fc-ccb3-4eba-b542-8feeb35a383a","Type":"ContainerStarted","Data":"204b5ac9a9f7a0ba3d5d456a423bf2ce399649b5c33cb50767993f548c0b0c52"} Nov 23 15:31:14 crc kubenswrapper[4718]: I1123 15:31:14.499822 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="120cc5bc-7f95-4a30-82c5-e0332df32e74" path="/var/lib/kubelet/pods/120cc5bc-7f95-4a30-82c5-e0332df32e74/volumes" Nov 23 15:31:14 crc kubenswrapper[4718]: I1123 15:31:14.726886 4718 generic.go:334] "Generic (PLEG): container finished" podID="bc1ca8fc-ccb3-4eba-b542-8feeb35a383a" containerID="204b5ac9a9f7a0ba3d5d456a423bf2ce399649b5c33cb50767993f548c0b0c52" exitCode=0 Nov 23 15:31:14 crc kubenswrapper[4718]: I1123 15:31:14.726928 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7drk8" event={"ID":"bc1ca8fc-ccb3-4eba-b542-8feeb35a383a","Type":"ContainerDied","Data":"204b5ac9a9f7a0ba3d5d456a423bf2ce399649b5c33cb50767993f548c0b0c52"} Nov 23 15:31:15 crc kubenswrapper[4718]: I1123 15:31:15.660568 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m8vtn"] Nov 23 15:31:15 crc kubenswrapper[4718]: E1123 15:31:15.661375 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120cc5bc-7f95-4a30-82c5-e0332df32e74" containerName="extract-utilities" Nov 23 15:31:15 crc kubenswrapper[4718]: I1123 15:31:15.661402 4718 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="120cc5bc-7f95-4a30-82c5-e0332df32e74" containerName="extract-utilities" Nov 23 15:31:15 crc kubenswrapper[4718]: E1123 15:31:15.661424 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120cc5bc-7f95-4a30-82c5-e0332df32e74" containerName="extract-content" Nov 23 15:31:15 crc kubenswrapper[4718]: I1123 15:31:15.661433 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="120cc5bc-7f95-4a30-82c5-e0332df32e74" containerName="extract-content" Nov 23 15:31:15 crc kubenswrapper[4718]: E1123 15:31:15.661490 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120cc5bc-7f95-4a30-82c5-e0332df32e74" containerName="registry-server" Nov 23 15:31:15 crc kubenswrapper[4718]: I1123 15:31:15.661500 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="120cc5bc-7f95-4a30-82c5-e0332df32e74" containerName="registry-server" Nov 23 15:31:15 crc kubenswrapper[4718]: I1123 15:31:15.661744 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="120cc5bc-7f95-4a30-82c5-e0332df32e74" containerName="registry-server" Nov 23 15:31:15 crc kubenswrapper[4718]: I1123 15:31:15.663484 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m8vtn" Nov 23 15:31:15 crc kubenswrapper[4718]: I1123 15:31:15.675734 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5k4v\" (UniqueName: \"kubernetes.io/projected/0f3c8363-4fce-4f1d-99ba-216df86cd7b3-kube-api-access-z5k4v\") pod \"redhat-operators-m8vtn\" (UID: \"0f3c8363-4fce-4f1d-99ba-216df86cd7b3\") " pod="openshift-marketplace/redhat-operators-m8vtn" Nov 23 15:31:15 crc kubenswrapper[4718]: I1123 15:31:15.675901 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f3c8363-4fce-4f1d-99ba-216df86cd7b3-utilities\") pod \"redhat-operators-m8vtn\" (UID: \"0f3c8363-4fce-4f1d-99ba-216df86cd7b3\") " pod="openshift-marketplace/redhat-operators-m8vtn" Nov 23 15:31:15 crc kubenswrapper[4718]: I1123 15:31:15.676153 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f3c8363-4fce-4f1d-99ba-216df86cd7b3-catalog-content\") pod \"redhat-operators-m8vtn\" (UID: \"0f3c8363-4fce-4f1d-99ba-216df86cd7b3\") " pod="openshift-marketplace/redhat-operators-m8vtn" Nov 23 15:31:15 crc kubenswrapper[4718]: I1123 15:31:15.677769 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m8vtn"] Nov 23 15:31:15 crc kubenswrapper[4718]: I1123 15:31:15.742003 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7drk8" event={"ID":"bc1ca8fc-ccb3-4eba-b542-8feeb35a383a","Type":"ContainerStarted","Data":"5369572f7f1de7644870c7f4c8b6fb90232f1ec172ac5e7999e23a3fd3f6c049"} Nov 23 15:31:15 crc kubenswrapper[4718]: I1123 15:31:15.760885 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7drk8" podStartSLOduration=2.298103874 podStartE2EDuration="4.760853769s" podCreationTimestamp="2025-11-23 15:31:11 +0000 UTC" firstStartedPulling="2025-11-23 15:31:12.707773184 +0000 UTC m=+2723.947393028" lastFinishedPulling="2025-11-23 15:31:15.170523079 +0000 UTC m=+2726.410142923" observedRunningTime="2025-11-23 15:31:15.760219024 +0000 UTC m=+2726.999838868" 
watchObservedRunningTime="2025-11-23 15:31:15.760853769 +0000 UTC m=+2727.000473613" Nov 23 15:31:15 crc kubenswrapper[4718]: I1123 15:31:15.778668 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f3c8363-4fce-4f1d-99ba-216df86cd7b3-catalog-content\") pod \"redhat-operators-m8vtn\" (UID: \"0f3c8363-4fce-4f1d-99ba-216df86cd7b3\") " pod="openshift-marketplace/redhat-operators-m8vtn" Nov 23 15:31:15 crc kubenswrapper[4718]: I1123 15:31:15.778811 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5k4v\" (UniqueName: \"kubernetes.io/projected/0f3c8363-4fce-4f1d-99ba-216df86cd7b3-kube-api-access-z5k4v\") pod \"redhat-operators-m8vtn\" (UID: \"0f3c8363-4fce-4f1d-99ba-216df86cd7b3\") " pod="openshift-marketplace/redhat-operators-m8vtn" Nov 23 15:31:15 crc kubenswrapper[4718]: I1123 15:31:15.778918 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f3c8363-4fce-4f1d-99ba-216df86cd7b3-utilities\") pod \"redhat-operators-m8vtn\" (UID: \"0f3c8363-4fce-4f1d-99ba-216df86cd7b3\") " pod="openshift-marketplace/redhat-operators-m8vtn" Nov 23 15:31:15 crc kubenswrapper[4718]: I1123 15:31:15.779412 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f3c8363-4fce-4f1d-99ba-216df86cd7b3-catalog-content\") pod \"redhat-operators-m8vtn\" (UID: \"0f3c8363-4fce-4f1d-99ba-216df86cd7b3\") " pod="openshift-marketplace/redhat-operators-m8vtn" Nov 23 15:31:15 crc kubenswrapper[4718]: I1123 15:31:15.779815 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f3c8363-4fce-4f1d-99ba-216df86cd7b3-utilities\") pod \"redhat-operators-m8vtn\" (UID: \"0f3c8363-4fce-4f1d-99ba-216df86cd7b3\") " pod="openshift-marketplace/redhat-operators-m8vtn" Nov 23 15:31:15 crc kubenswrapper[4718]: I1123 15:31:15.819003 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5k4v\" (UniqueName: \"kubernetes.io/projected/0f3c8363-4fce-4f1d-99ba-216df86cd7b3-kube-api-access-z5k4v\") pod \"redhat-operators-m8vtn\" (UID: \"0f3c8363-4fce-4f1d-99ba-216df86cd7b3\") " pod="openshift-marketplace/redhat-operators-m8vtn" Nov 23 15:31:15 crc kubenswrapper[4718]: I1123 15:31:15.982427 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m8vtn" Nov 23 15:31:16 crc kubenswrapper[4718]: I1123 15:31:16.509124 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m8vtn"] Nov 23 15:31:16 crc kubenswrapper[4718]: I1123 15:31:16.756007 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8vtn" event={"ID":"0f3c8363-4fce-4f1d-99ba-216df86cd7b3","Type":"ContainerStarted","Data":"6603c9690c587cb66ce5666aacbca1b1a885b8634b5164ad31f38a76a6e27ce5"} Nov 23 15:31:16 crc kubenswrapper[4718]: I1123 15:31:16.756418 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8vtn" event={"ID":"0f3c8363-4fce-4f1d-99ba-216df86cd7b3","Type":"ContainerStarted","Data":"94e024f4fda5ad8034288f457d1388e17ea7cb237dcdf6660e147b848594f79e"} Nov 23 15:31:17 crc kubenswrapper[4718]: I1123 15:31:17.777964 4718 generic.go:334] "Generic (PLEG): container finished" podID="0f3c8363-4fce-4f1d-99ba-216df86cd7b3" containerID="6603c9690c587cb66ce5666aacbca1b1a885b8634b5164ad31f38a76a6e27ce5" exitCode=0 Nov 23 15:31:17 crc kubenswrapper[4718]: I1123 15:31:17.778295 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8vtn" event={"ID":"0f3c8363-4fce-4f1d-99ba-216df86cd7b3","Type":"ContainerDied","Data":"6603c9690c587cb66ce5666aacbca1b1a885b8634b5164ad31f38a76a6e27ce5"} Nov 23 15:31:19 crc kubenswrapper[4718]: I1123 15:31:19.863538 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8vtn" event={"ID":"0f3c8363-4fce-4f1d-99ba-216df86cd7b3","Type":"ContainerStarted","Data":"e3455284d4bc82581e26df07357bd6f76bb975028e6c91ac3f2601709bd8c2c2"} Nov 23 15:31:20 crc kubenswrapper[4718]: I1123 15:31:20.882633 4718 generic.go:334] "Generic (PLEG): container finished" podID="0f3c8363-4fce-4f1d-99ba-216df86cd7b3" containerID="e3455284d4bc82581e26df07357bd6f76bb975028e6c91ac3f2601709bd8c2c2" exitCode=0 Nov 23 15:31:20 crc kubenswrapper[4718]: I1123 15:31:20.882716 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8vtn" event={"ID":"0f3c8363-4fce-4f1d-99ba-216df86cd7b3","Type":"ContainerDied","Data":"e3455284d4bc82581e26df07357bd6f76bb975028e6c91ac3f2601709bd8c2c2"} Nov 23 15:31:21 crc kubenswrapper[4718]: I1123 15:31:21.399758 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7drk8" Nov 23 15:31:21 crc kubenswrapper[4718]: I1123 15:31:21.399811 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7drk8" Nov 23 15:31:21 crc kubenswrapper[4718]: I1123 15:31:21.468994 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7drk8" Nov 23 15:31:21 crc kubenswrapper[4718]: I1123 15:31:21.951627 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7drk8" Nov 23 15:31:24 crc kubenswrapper[4718]: I1123 15:31:24.927308 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8vtn" event={"ID":"0f3c8363-4fce-4f1d-99ba-216df86cd7b3","Type":"ContainerStarted","Data":"6c5b2518889dfa0baeb159660df65c14744bd56e07952038b6f5fffd8b838c05"} Nov 23 15:31:24 crc kubenswrapper[4718]: I1123 15:31:24.949210 4718 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-operators-m8vtn" podStartSLOduration=3.546592985 podStartE2EDuration="9.949191883s" podCreationTimestamp="2025-11-23 15:31:15 +0000 UTC" firstStartedPulling="2025-11-23 15:31:17.780320069 +0000 UTC m=+2729.019939913" lastFinishedPulling="2025-11-23 15:31:24.182918947 +0000 UTC m=+2735.422538811" observedRunningTime="2025-11-23 15:31:24.944813854 +0000 UTC m=+2736.184433738" watchObservedRunningTime="2025-11-23 15:31:24.949191883 +0000 UTC m=+2736.188811727" Nov 23 15:31:25 crc kubenswrapper[4718]: I1123 15:31:25.250418 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7drk8"] Nov 23 15:31:25 crc kubenswrapper[4718]: I1123 15:31:25.250845 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7drk8" podUID="bc1ca8fc-ccb3-4eba-b542-8feeb35a383a" containerName="registry-server" containerID="cri-o://5369572f7f1de7644870c7f4c8b6fb90232f1ec172ac5e7999e23a3fd3f6c049" gracePeriod=2 Nov 23 15:31:25 crc kubenswrapper[4718]: I1123 15:31:25.743703 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7drk8" Nov 23 15:31:25 crc kubenswrapper[4718]: I1123 15:31:25.771566 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249zn\" (UniqueName: \"kubernetes.io/projected/bc1ca8fc-ccb3-4eba-b542-8feeb35a383a-kube-api-access-249zn\") pod \"bc1ca8fc-ccb3-4eba-b542-8feeb35a383a\" (UID: \"bc1ca8fc-ccb3-4eba-b542-8feeb35a383a\") " Nov 23 15:31:25 crc kubenswrapper[4718]: I1123 15:31:25.771729 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc1ca8fc-ccb3-4eba-b542-8feeb35a383a-catalog-content\") pod \"bc1ca8fc-ccb3-4eba-b542-8feeb35a383a\" (UID: \"bc1ca8fc-ccb3-4eba-b542-8feeb35a383a\") " Nov 23 15:31:25 crc kubenswrapper[4718]: I1123 15:31:25.771813 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc1ca8fc-ccb3-4eba-b542-8feeb35a383a-utilities\") pod \"bc1ca8fc-ccb3-4eba-b542-8feeb35a383a\" (UID: \"bc1ca8fc-ccb3-4eba-b542-8feeb35a383a\") " Nov 23 15:31:25 crc kubenswrapper[4718]: I1123 15:31:25.773794 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc1ca8fc-ccb3-4eba-b542-8feeb35a383a-utilities" (OuterVolumeSpecName: "utilities") pod "bc1ca8fc-ccb3-4eba-b542-8feeb35a383a" (UID: "bc1ca8fc-ccb3-4eba-b542-8feeb35a383a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:31:25 crc kubenswrapper[4718]: I1123 15:31:25.784903 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc1ca8fc-ccb3-4eba-b542-8feeb35a383a-kube-api-access-249zn" (OuterVolumeSpecName: "kube-api-access-249zn") pod "bc1ca8fc-ccb3-4eba-b542-8feeb35a383a" (UID: "bc1ca8fc-ccb3-4eba-b542-8feeb35a383a"). InnerVolumeSpecName "kube-api-access-249zn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:31:25 crc kubenswrapper[4718]: I1123 15:31:25.864941 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc1ca8fc-ccb3-4eba-b542-8feeb35a383a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc1ca8fc-ccb3-4eba-b542-8feeb35a383a" (UID: "bc1ca8fc-ccb3-4eba-b542-8feeb35a383a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:31:25 crc kubenswrapper[4718]: I1123 15:31:25.874018 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249zn\" (UniqueName: \"kubernetes.io/projected/bc1ca8fc-ccb3-4eba-b542-8feeb35a383a-kube-api-access-249zn\") on node \"crc\" DevicePath \"\"" Nov 23 15:31:25 crc kubenswrapper[4718]: I1123 15:31:25.874063 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc1ca8fc-ccb3-4eba-b542-8feeb35a383a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 15:31:25 crc kubenswrapper[4718]: I1123 15:31:25.874077 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc1ca8fc-ccb3-4eba-b542-8feeb35a383a-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 15:31:25 crc kubenswrapper[4718]: I1123 15:31:25.936971 4718 generic.go:334] "Generic (PLEG): container finished" podID="bc1ca8fc-ccb3-4eba-b542-8feeb35a383a" containerID="5369572f7f1de7644870c7f4c8b6fb90232f1ec172ac5e7999e23a3fd3f6c049" exitCode=0 Nov 23 15:31:25 crc kubenswrapper[4718]: I1123 15:31:25.937002 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7drk8" event={"ID":"bc1ca8fc-ccb3-4eba-b542-8feeb35a383a","Type":"ContainerDied","Data":"5369572f7f1de7644870c7f4c8b6fb90232f1ec172ac5e7999e23a3fd3f6c049"} Nov 23 15:31:25 crc kubenswrapper[4718]: I1123 15:31:25.937035 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7drk8" Nov 23 15:31:25 crc kubenswrapper[4718]: I1123 15:31:25.937052 4718 scope.go:117] "RemoveContainer" containerID="5369572f7f1de7644870c7f4c8b6fb90232f1ec172ac5e7999e23a3fd3f6c049" Nov 23 15:31:25 crc kubenswrapper[4718]: I1123 15:31:25.937041 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7drk8" event={"ID":"bc1ca8fc-ccb3-4eba-b542-8feeb35a383a","Type":"ContainerDied","Data":"a94ebe31ddb155c726902065d71afd3d31787fa963a319a3d0673eb6cfe5377b"} Nov 23 15:31:25 crc kubenswrapper[4718]: I1123 15:31:25.964999 4718 scope.go:117] "RemoveContainer" containerID="204b5ac9a9f7a0ba3d5d456a423bf2ce399649b5c33cb50767993f548c0b0c52" Nov 23 15:31:25 crc kubenswrapper[4718]: I1123 15:31:25.985089 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m8vtn" Nov 23 15:31:25 crc kubenswrapper[4718]: I1123 15:31:25.985143 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m8vtn" Nov 23 15:31:25 crc kubenswrapper[4718]: I1123 15:31:25.996672 4718 scope.go:117] "RemoveContainer" containerID="4ed76aa157dbbdf4ee157f13cfc5e970005da8778d4b806a327a7abd2d88b98c" Nov 23 15:31:26 crc kubenswrapper[4718]: I1123 15:31:25.999971 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7drk8"] Nov 23 15:31:26 crc kubenswrapper[4718]: I1123 15:31:26.015087 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7drk8"] Nov 23 15:31:26 crc kubenswrapper[4718]: I1123 15:31:26.036461 4718 scope.go:117] "RemoveContainer" containerID="5369572f7f1de7644870c7f4c8b6fb90232f1ec172ac5e7999e23a3fd3f6c049" Nov 23 15:31:26 crc kubenswrapper[4718]: E1123 15:31:26.037018 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5369572f7f1de7644870c7f4c8b6fb90232f1ec172ac5e7999e23a3fd3f6c049\": container with ID starting with 5369572f7f1de7644870c7f4c8b6fb90232f1ec172ac5e7999e23a3fd3f6c049 not found: ID does not exist" containerID="5369572f7f1de7644870c7f4c8b6fb90232f1ec172ac5e7999e23a3fd3f6c049" Nov 23 15:31:26 crc kubenswrapper[4718]: I1123 15:31:26.037059 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5369572f7f1de7644870c7f4c8b6fb90232f1ec172ac5e7999e23a3fd3f6c049"} err="failed to get container status \"5369572f7f1de7644870c7f4c8b6fb90232f1ec172ac5e7999e23a3fd3f6c049\": rpc error: code = NotFound desc = could not find container \"5369572f7f1de7644870c7f4c8b6fb90232f1ec172ac5e7999e23a3fd3f6c049\": container with ID starting with 5369572f7f1de7644870c7f4c8b6fb90232f1ec172ac5e7999e23a3fd3f6c049 not found: ID does not exist" Nov 23 15:31:26 crc kubenswrapper[4718]: I1123 15:31:26.037091 4718 scope.go:117] "RemoveContainer" containerID="204b5ac9a9f7a0ba3d5d456a423bf2ce399649b5c33cb50767993f548c0b0c52" Nov 23 15:31:26 crc kubenswrapper[4718]: E1123 15:31:26.038694 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"204b5ac9a9f7a0ba3d5d456a423bf2ce399649b5c33cb50767993f548c0b0c52\": container with ID starting with 204b5ac9a9f7a0ba3d5d456a423bf2ce399649b5c33cb50767993f548c0b0c52 not found: ID does not exist" containerID="204b5ac9a9f7a0ba3d5d456a423bf2ce399649b5c33cb50767993f548c0b0c52" Nov 23 15:31:26 
crc kubenswrapper[4718]: I1123 15:31:26.038743 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"204b5ac9a9f7a0ba3d5d456a423bf2ce399649b5c33cb50767993f548c0b0c52"} err="failed to get container status \"204b5ac9a9f7a0ba3d5d456a423bf2ce399649b5c33cb50767993f548c0b0c52\": rpc error: code = NotFound desc = could not find container \"204b5ac9a9f7a0ba3d5d456a423bf2ce399649b5c33cb50767993f548c0b0c52\": container with ID starting with 204b5ac9a9f7a0ba3d5d456a423bf2ce399649b5c33cb50767993f548c0b0c52 not found: ID does not exist" Nov 23 15:31:26 crc kubenswrapper[4718]: I1123 15:31:26.038757 4718 scope.go:117] "RemoveContainer" containerID="4ed76aa157dbbdf4ee157f13cfc5e970005da8778d4b806a327a7abd2d88b98c" Nov 23 15:31:26 crc kubenswrapper[4718]: E1123 15:31:26.039256 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ed76aa157dbbdf4ee157f13cfc5e970005da8778d4b806a327a7abd2d88b98c\": container with ID starting with 4ed76aa157dbbdf4ee157f13cfc5e970005da8778d4b806a327a7abd2d88b98c not found: ID does not exist" containerID="4ed76aa157dbbdf4ee157f13cfc5e970005da8778d4b806a327a7abd2d88b98c" Nov 23 15:31:26 crc kubenswrapper[4718]: I1123 15:31:26.039284 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed76aa157dbbdf4ee157f13cfc5e970005da8778d4b806a327a7abd2d88b98c"} err="failed to get container status \"4ed76aa157dbbdf4ee157f13cfc5e970005da8778d4b806a327a7abd2d88b98c\": rpc error: code = NotFound desc = could not find container \"4ed76aa157dbbdf4ee157f13cfc5e970005da8778d4b806a327a7abd2d88b98c\": container with ID starting with 4ed76aa157dbbdf4ee157f13cfc5e970005da8778d4b806a327a7abd2d88b98c not found: ID does not exist" Nov 23 15:31:26 crc kubenswrapper[4718]: I1123 15:31:26.458978 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc1ca8fc-ccb3-4eba-b542-8feeb35a383a" path="/var/lib/kubelet/pods/bc1ca8fc-ccb3-4eba-b542-8feeb35a383a/volumes" Nov 23 15:31:27 crc kubenswrapper[4718]: I1123 15:31:27.040757 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m8vtn" podUID="0f3c8363-4fce-4f1d-99ba-216df86cd7b3" containerName="registry-server" probeResult="failure" output=< Nov 23 15:31:27 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Nov 23 15:31:27 crc kubenswrapper[4718]: > Nov 23 15:31:36 crc kubenswrapper[4718]: I1123 15:31:36.044158 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m8vtn" Nov 23 15:31:36 crc kubenswrapper[4718]: I1123 15:31:36.098872 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m8vtn" Nov 23 15:31:36 crc kubenswrapper[4718]: I1123 15:31:36.281603 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m8vtn"] Nov 23 15:31:38 crc kubenswrapper[4718]: I1123 15:31:38.060018 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m8vtn" podUID="0f3c8363-4fce-4f1d-99ba-216df86cd7b3" containerName="registry-server" containerID="cri-o://6c5b2518889dfa0baeb159660df65c14744bd56e07952038b6f5fffd8b838c05" gracePeriod=2 Nov 23 15:31:38 crc kubenswrapper[4718]: I1123 15:31:38.568417 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m8vtn" Nov 23 15:31:38 crc kubenswrapper[4718]: I1123 15:31:38.726755 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f3c8363-4fce-4f1d-99ba-216df86cd7b3-utilities\") pod \"0f3c8363-4fce-4f1d-99ba-216df86cd7b3\" (UID: \"0f3c8363-4fce-4f1d-99ba-216df86cd7b3\") " Nov 23 15:31:38 crc kubenswrapper[4718]: I1123 15:31:38.726891 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5k4v\" (UniqueName: \"kubernetes.io/projected/0f3c8363-4fce-4f1d-99ba-216df86cd7b3-kube-api-access-z5k4v\") pod \"0f3c8363-4fce-4f1d-99ba-216df86cd7b3\" (UID: \"0f3c8363-4fce-4f1d-99ba-216df86cd7b3\") " Nov 23 15:31:38 crc kubenswrapper[4718]: I1123 15:31:38.726941 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f3c8363-4fce-4f1d-99ba-216df86cd7b3-catalog-content\") pod \"0f3c8363-4fce-4f1d-99ba-216df86cd7b3\" (UID: \"0f3c8363-4fce-4f1d-99ba-216df86cd7b3\") " Nov 23 15:31:38 crc kubenswrapper[4718]: I1123 15:31:38.727703 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f3c8363-4fce-4f1d-99ba-216df86cd7b3-utilities" (OuterVolumeSpecName: "utilities") pod "0f3c8363-4fce-4f1d-99ba-216df86cd7b3" (UID: "0f3c8363-4fce-4f1d-99ba-216df86cd7b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:31:38 crc kubenswrapper[4718]: I1123 15:31:38.733511 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f3c8363-4fce-4f1d-99ba-216df86cd7b3-kube-api-access-z5k4v" (OuterVolumeSpecName: "kube-api-access-z5k4v") pod "0f3c8363-4fce-4f1d-99ba-216df86cd7b3" (UID: "0f3c8363-4fce-4f1d-99ba-216df86cd7b3"). InnerVolumeSpecName "kube-api-access-z5k4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:31:38 crc kubenswrapper[4718]: I1123 15:31:38.829336 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f3c8363-4fce-4f1d-99ba-216df86cd7b3-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 15:31:38 crc kubenswrapper[4718]: I1123 15:31:38.829370 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5k4v\" (UniqueName: \"kubernetes.io/projected/0f3c8363-4fce-4f1d-99ba-216df86cd7b3-kube-api-access-z5k4v\") on node \"crc\" DevicePath \"\"" Nov 23 15:31:38 crc kubenswrapper[4718]: I1123 15:31:38.831261 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f3c8363-4fce-4f1d-99ba-216df86cd7b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f3c8363-4fce-4f1d-99ba-216df86cd7b3" (UID: "0f3c8363-4fce-4f1d-99ba-216df86cd7b3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:31:38 crc kubenswrapper[4718]: I1123 15:31:38.931673 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f3c8363-4fce-4f1d-99ba-216df86cd7b3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 15:31:39 crc kubenswrapper[4718]: I1123 15:31:39.070522 4718 generic.go:334] "Generic (PLEG): container finished" podID="0f3c8363-4fce-4f1d-99ba-216df86cd7b3" containerID="6c5b2518889dfa0baeb159660df65c14744bd56e07952038b6f5fffd8b838c05" exitCode=0 Nov 23 15:31:39 crc kubenswrapper[4718]: I1123 15:31:39.070572 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m8vtn" Nov 23 15:31:39 crc kubenswrapper[4718]: I1123 15:31:39.070574 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8vtn" event={"ID":"0f3c8363-4fce-4f1d-99ba-216df86cd7b3","Type":"ContainerDied","Data":"6c5b2518889dfa0baeb159660df65c14744bd56e07952038b6f5fffd8b838c05"} Nov 23 15:31:39 crc kubenswrapper[4718]: I1123 15:31:39.070627 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8vtn" event={"ID":"0f3c8363-4fce-4f1d-99ba-216df86cd7b3","Type":"ContainerDied","Data":"94e024f4fda5ad8034288f457d1388e17ea7cb237dcdf6660e147b848594f79e"} Nov 23 15:31:39 crc kubenswrapper[4718]: I1123 15:31:39.070656 4718 scope.go:117] "RemoveContainer" containerID="6c5b2518889dfa0baeb159660df65c14744bd56e07952038b6f5fffd8b838c05" Nov 23 15:31:39 crc kubenswrapper[4718]: I1123 15:31:39.095751 4718 scope.go:117] "RemoveContainer" containerID="e3455284d4bc82581e26df07357bd6f76bb975028e6c91ac3f2601709bd8c2c2" Nov 23 15:31:39 crc kubenswrapper[4718]: I1123 15:31:39.110554 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m8vtn"] Nov 23 15:31:39 crc kubenswrapper[4718]: I1123 15:31:39.119409 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m8vtn"] Nov 23 15:31:39 crc kubenswrapper[4718]: I1123 15:31:39.128196 4718 scope.go:117] "RemoveContainer" containerID="6603c9690c587cb66ce5666aacbca1b1a885b8634b5164ad31f38a76a6e27ce5" Nov 23 15:31:39 crc kubenswrapper[4718]: I1123 15:31:39.186572 4718 scope.go:117] "RemoveContainer" containerID="6c5b2518889dfa0baeb159660df65c14744bd56e07952038b6f5fffd8b838c05" Nov 23 15:31:39 crc kubenswrapper[4718]: E1123 15:31:39.187481 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c5b2518889dfa0baeb159660df65c14744bd56e07952038b6f5fffd8b838c05\": container with ID starting with 6c5b2518889dfa0baeb159660df65c14744bd56e07952038b6f5fffd8b838c05 not found: ID does not exist" containerID="6c5b2518889dfa0baeb159660df65c14744bd56e07952038b6f5fffd8b838c05" Nov 23 15:31:39 crc kubenswrapper[4718]: I1123 15:31:39.187517 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c5b2518889dfa0baeb159660df65c14744bd56e07952038b6f5fffd8b838c05"} err="failed to get container status \"6c5b2518889dfa0baeb159660df65c14744bd56e07952038b6f5fffd8b838c05\": rpc error: code = NotFound desc = could not find container \"6c5b2518889dfa0baeb159660df65c14744bd56e07952038b6f5fffd8b838c05\": container with ID starting with 6c5b2518889dfa0baeb159660df65c14744bd56e07952038b6f5fffd8b838c05 not found: ID does not exist" Nov 23 15:31:39 crc 
kubenswrapper[4718]: I1123 15:31:39.187560 4718 scope.go:117] "RemoveContainer" containerID="e3455284d4bc82581e26df07357bd6f76bb975028e6c91ac3f2601709bd8c2c2" Nov 23 15:31:39 crc kubenswrapper[4718]: E1123 15:31:39.188235 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3455284d4bc82581e26df07357bd6f76bb975028e6c91ac3f2601709bd8c2c2\": container with ID starting with e3455284d4bc82581e26df07357bd6f76bb975028e6c91ac3f2601709bd8c2c2 not found: ID does not exist" containerID="e3455284d4bc82581e26df07357bd6f76bb975028e6c91ac3f2601709bd8c2c2" Nov 23 15:31:39 crc kubenswrapper[4718]: I1123 15:31:39.188279 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3455284d4bc82581e26df07357bd6f76bb975028e6c91ac3f2601709bd8c2c2"} err="failed to get container status \"e3455284d4bc82581e26df07357bd6f76bb975028e6c91ac3f2601709bd8c2c2\": rpc error: code = NotFound desc = could not find container \"e3455284d4bc82581e26df07357bd6f76bb975028e6c91ac3f2601709bd8c2c2\": container with ID starting with e3455284d4bc82581e26df07357bd6f76bb975028e6c91ac3f2601709bd8c2c2 not found: ID does not exist" Nov 23 15:31:39 crc kubenswrapper[4718]: I1123 15:31:39.188311 4718 scope.go:117] "RemoveContainer" containerID="6603c9690c587cb66ce5666aacbca1b1a885b8634b5164ad31f38a76a6e27ce5" Nov 23 15:31:39 crc kubenswrapper[4718]: E1123 15:31:39.190310 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6603c9690c587cb66ce5666aacbca1b1a885b8634b5164ad31f38a76a6e27ce5\": container with ID starting with 6603c9690c587cb66ce5666aacbca1b1a885b8634b5164ad31f38a76a6e27ce5 not found: ID does not exist" containerID="6603c9690c587cb66ce5666aacbca1b1a885b8634b5164ad31f38a76a6e27ce5" Nov 23 15:31:39 crc kubenswrapper[4718]: I1123 15:31:39.190381 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6603c9690c587cb66ce5666aacbca1b1a885b8634b5164ad31f38a76a6e27ce5"} err="failed to get container status \"6603c9690c587cb66ce5666aacbca1b1a885b8634b5164ad31f38a76a6e27ce5\": rpc error: code = NotFound desc = could not find container \"6603c9690c587cb66ce5666aacbca1b1a885b8634b5164ad31f38a76a6e27ce5\": container with ID starting with 6603c9690c587cb66ce5666aacbca1b1a885b8634b5164ad31f38a76a6e27ce5 not found: ID does not exist" Nov 23 15:31:40 crc kubenswrapper[4718]: I1123 15:31:40.456552 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f3c8363-4fce-4f1d-99ba-216df86cd7b3" path="/var/lib/kubelet/pods/0f3c8363-4fce-4f1d-99ba-216df86cd7b3/volumes" Nov 23 15:32:53 crc kubenswrapper[4718]: I1123 15:32:53.053794 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 15:32:53 crc kubenswrapper[4718]: I1123 15:32:53.054425 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 15:33:23 crc kubenswrapper[4718]: I1123 15:33:23.053284 4718 patch_prober.go:28] interesting 
pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 15:33:23 crc kubenswrapper[4718]: I1123 15:33:23.053954 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 15:33:53 crc kubenswrapper[4718]: I1123 15:33:53.053293 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 15:33:53 crc kubenswrapper[4718]: I1123 15:33:53.053885 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 15:33:53 crc kubenswrapper[4718]: I1123 15:33:53.053944 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" Nov 23 15:33:53 crc kubenswrapper[4718]: I1123 15:33:53.054878 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8"} pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 15:33:53 crc kubenswrapper[4718]: I1123 15:33:53.054969 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" containerID="cri-o://b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" gracePeriod=600 Nov 23 15:33:53 crc kubenswrapper[4718]: E1123 15:33:53.248558 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:33:53 crc kubenswrapper[4718]: I1123 15:33:53.385710 4718 generic.go:334] "Generic (PLEG): container finished" podID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" exitCode=0 Nov 23 15:33:53 crc kubenswrapper[4718]: I1123 15:33:53.385794 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerDied","Data":"b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8"}
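
Three consecutive liveness failures, one per 30s probe period (15:32:53, 15:33:23, 15:33:53), are what trigger the restart decision logged above: "Container machine-config-daemon failed liveness probe, will be restarted", followed by a kill with the pod's 600s grace period. The restart itself is then gated by CrashLoopBackOff. Kubernetes backs off restarts exponentially, starting at 10s and doubling up to a 5m cap, which is the "back-off 5m0s" in the error. The timeline below bears the cap out: the container dies at 15:33:53 and its replacement (74327bd7…) only starts at 15:39:03, roughly five minutes later, with each intervening "Error syncing pod, skipping" line being a sync attempt rejected while the back-off window is still open. A small sketch of that delay schedule (the standard documented back-off shape; the kubelet's pod_workers internals are not shown in this log):

    def crashloop_delays(restarts, base=10.0, cap=300.0):
        """Nominal CrashLoopBackOff delays: 10s, 20s, 40s, ... capped at 5m (300s)."""
        delay = base
        for _ in range(restarts):
            yield min(delay, cap)
            delay *= 2

    # By the sixth restart the delay saturates at the 5m cap seen above:
    print(list(crashloop_delays(7)))  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]
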
Nov 23 15:33:53 crc kubenswrapper[4718]: I1123 15:33:53.386221 4718 scope.go:117] "RemoveContainer" containerID="f1a6f0050160fd8439a472e79a076aad465bfef06eedf01d8fded7481caa3ce4" Nov 23 15:33:53 crc kubenswrapper[4718]: I1123 15:33:53.387378 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:33:53 crc kubenswrapper[4718]: E1123 15:33:53.390527 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:34:04 crc kubenswrapper[4718]: I1123 15:34:04.441431 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:34:04 crc kubenswrapper[4718]: E1123 15:34:04.442284 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:34:19 crc kubenswrapper[4718]: I1123 15:34:19.441353 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:34:19 crc kubenswrapper[4718]: E1123 15:34:19.442207 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:34:30 crc kubenswrapper[4718]: I1123 15:34:30.453023 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:34:30 crc kubenswrapper[4718]: E1123 15:34:30.453767 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:34:42 crc kubenswrapper[4718]: I1123 15:34:42.441529 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:34:42 crc kubenswrapper[4718]: E1123 15:34:42.442283 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:34:54 crc kubenswrapper[4718]: I1123
15:34:54.441264 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:34:54 crc kubenswrapper[4718]: E1123 15:34:54.442164 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:35:07 crc kubenswrapper[4718]: I1123 15:35:07.441538 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:35:07 crc kubenswrapper[4718]: E1123 15:35:07.444042 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:35:19 crc kubenswrapper[4718]: I1123 15:35:19.442781 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:35:19 crc kubenswrapper[4718]: E1123 15:35:19.443943 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:35:32 crc kubenswrapper[4718]: I1123 15:35:32.441112 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:35:32 crc kubenswrapper[4718]: E1123 15:35:32.441929 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:35:44 crc kubenswrapper[4718]: I1123 15:35:44.441133 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:35:44 crc kubenswrapper[4718]: E1123 15:35:44.442046 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:35:56 crc kubenswrapper[4718]: I1123 15:35:56.441316 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:35:56 crc kubenswrapper[4718]: E1123 15:35:56.442091 
4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:36:11 crc kubenswrapper[4718]: I1123 15:36:11.441021 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:36:11 crc kubenswrapper[4718]: E1123 15:36:11.441775 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:36:22 crc kubenswrapper[4718]: I1123 15:36:22.441019 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:36:22 crc kubenswrapper[4718]: E1123 15:36:22.441779 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:36:37 crc kubenswrapper[4718]: I1123 15:36:37.441244 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:36:37 crc kubenswrapper[4718]: E1123 15:36:37.442103 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:36:50 crc kubenswrapper[4718]: I1123 15:36:50.447771 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:36:50 crc kubenswrapper[4718]: E1123 15:36:50.448538 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:37:05 crc kubenswrapper[4718]: I1123 15:37:05.441245 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:37:05 crc kubenswrapper[4718]: E1123 15:37:05.443234 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:37:16 crc kubenswrapper[4718]: I1123 15:37:16.441422 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:37:16 crc kubenswrapper[4718]: E1123 15:37:16.442199 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:37:29 crc kubenswrapper[4718]: I1123 15:37:29.441260 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:37:29 crc kubenswrapper[4718]: E1123 15:37:29.441915 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:37:43 crc kubenswrapper[4718]: I1123 15:37:43.440719 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:37:43 crc kubenswrapper[4718]: E1123 15:37:43.441287 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:37:57 crc kubenswrapper[4718]: I1123 15:37:57.441834 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:37:57 crc kubenswrapper[4718]: E1123 15:37:57.442721 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:38:12 crc kubenswrapper[4718]: I1123 15:38:12.441248 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:38:12 crc kubenswrapper[4718]: E1123 15:38:12.442159 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:38:26 crc kubenswrapper[4718]: I1123 15:38:26.441714 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:38:26 crc kubenswrapper[4718]: E1123 15:38:26.442287 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:38:40 crc kubenswrapper[4718]: I1123 15:38:40.447500 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:38:40 crc kubenswrapper[4718]: E1123 15:38:40.448170 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:38:51 crc kubenswrapper[4718]: I1123 15:38:51.440760 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:38:51 crc kubenswrapper[4718]: E1123 15:38:51.442413 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:39:02 crc kubenswrapper[4718]: I1123 15:39:02.442037 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:39:03 crc kubenswrapper[4718]: I1123 15:39:03.338691 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerStarted","Data":"74327bd72e539c965b0611b0786967503764bdb42e874d7079114964edd8c8fa"} Nov 23 15:40:53 crc kubenswrapper[4718]: I1123 15:40:53.708611 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hp2fx"] Nov 23 15:40:53 crc kubenswrapper[4718]: E1123 15:40:53.709562 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f3c8363-4fce-4f1d-99ba-216df86cd7b3" containerName="extract-utilities" Nov 23 15:40:53 crc kubenswrapper[4718]: I1123 15:40:53.709578 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f3c8363-4fce-4f1d-99ba-216df86cd7b3" containerName="extract-utilities" Nov 23 15:40:53 crc kubenswrapper[4718]: E1123 15:40:53.709594 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f3c8363-4fce-4f1d-99ba-216df86cd7b3" containerName="registry-server" Nov 23 15:40:53 crc kubenswrapper[4718]: I1123 15:40:53.709603 4718 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0f3c8363-4fce-4f1d-99ba-216df86cd7b3" containerName="registry-server" Nov 23 15:40:53 crc kubenswrapper[4718]: E1123 15:40:53.709626 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc1ca8fc-ccb3-4eba-b542-8feeb35a383a" containerName="extract-utilities" Nov 23 15:40:53 crc kubenswrapper[4718]: I1123 15:40:53.709634 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc1ca8fc-ccb3-4eba-b542-8feeb35a383a" containerName="extract-utilities" Nov 23 15:40:53 crc kubenswrapper[4718]: E1123 15:40:53.709650 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f3c8363-4fce-4f1d-99ba-216df86cd7b3" containerName="extract-content" Nov 23 15:40:53 crc kubenswrapper[4718]: I1123 15:40:53.709659 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f3c8363-4fce-4f1d-99ba-216df86cd7b3" containerName="extract-content" Nov 23 15:40:53 crc kubenswrapper[4718]: E1123 15:40:53.709689 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc1ca8fc-ccb3-4eba-b542-8feeb35a383a" containerName="registry-server" Nov 23 15:40:53 crc kubenswrapper[4718]: I1123 15:40:53.709697 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc1ca8fc-ccb3-4eba-b542-8feeb35a383a" containerName="registry-server" Nov 23 15:40:53 crc kubenswrapper[4718]: E1123 15:40:53.709709 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc1ca8fc-ccb3-4eba-b542-8feeb35a383a" containerName="extract-content" Nov 23 15:40:53 crc kubenswrapper[4718]: I1123 15:40:53.709717 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc1ca8fc-ccb3-4eba-b542-8feeb35a383a" containerName="extract-content" Nov 23 15:40:53 crc kubenswrapper[4718]: I1123 15:40:53.709961 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f3c8363-4fce-4f1d-99ba-216df86cd7b3" containerName="registry-server" Nov 23 15:40:53 crc kubenswrapper[4718]: I1123 15:40:53.709999 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc1ca8fc-ccb3-4eba-b542-8feeb35a383a" containerName="registry-server" Nov 23 15:40:53 crc kubenswrapper[4718]: I1123 15:40:53.711623 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hp2fx" Nov 23 15:40:53 crc kubenswrapper[4718]: I1123 15:40:53.737370 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hp2fx"] Nov 23 15:40:53 crc kubenswrapper[4718]: I1123 15:40:53.806380 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b779871-f750-48e4-94fe-7c49dd547516-catalog-content\") pod \"certified-operators-hp2fx\" (UID: \"3b779871-f750-48e4-94fe-7c49dd547516\") " pod="openshift-marketplace/certified-operators-hp2fx" Nov 23 15:40:53 crc kubenswrapper[4718]: I1123 15:40:53.806628 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5zvm\" (UniqueName: \"kubernetes.io/projected/3b779871-f750-48e4-94fe-7c49dd547516-kube-api-access-g5zvm\") pod \"certified-operators-hp2fx\" (UID: \"3b779871-f750-48e4-94fe-7c49dd547516\") " pod="openshift-marketplace/certified-operators-hp2fx" Nov 23 15:40:53 crc kubenswrapper[4718]: I1123 15:40:53.806801 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b779871-f750-48e4-94fe-7c49dd547516-utilities\") pod \"certified-operators-hp2fx\" (UID: \"3b779871-f750-48e4-94fe-7c49dd547516\") " pod="openshift-marketplace/certified-operators-hp2fx" Nov 23 15:40:53 crc kubenswrapper[4718]: I1123 15:40:53.908172 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b779871-f750-48e4-94fe-7c49dd547516-utilities\") pod \"certified-operators-hp2fx\" (UID: \"3b779871-f750-48e4-94fe-7c49dd547516\") " pod="openshift-marketplace/certified-operators-hp2fx" Nov 23 15:40:53 crc kubenswrapper[4718]: I1123 15:40:53.908275 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b779871-f750-48e4-94fe-7c49dd547516-catalog-content\") pod \"certified-operators-hp2fx\" (UID: \"3b779871-f750-48e4-94fe-7c49dd547516\") " pod="openshift-marketplace/certified-operators-hp2fx" Nov 23 15:40:53 crc kubenswrapper[4718]: I1123 15:40:53.908364 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5zvm\" (UniqueName: \"kubernetes.io/projected/3b779871-f750-48e4-94fe-7c49dd547516-kube-api-access-g5zvm\") pod \"certified-operators-hp2fx\" (UID: \"3b779871-f750-48e4-94fe-7c49dd547516\") " pod="openshift-marketplace/certified-operators-hp2fx" Nov 23 15:40:53 crc kubenswrapper[4718]: I1123 15:40:53.908835 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b779871-f750-48e4-94fe-7c49dd547516-utilities\") pod \"certified-operators-hp2fx\" (UID: \"3b779871-f750-48e4-94fe-7c49dd547516\") " pod="openshift-marketplace/certified-operators-hp2fx" Nov 23 15:40:53 crc kubenswrapper[4718]: I1123 15:40:53.908903 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b779871-f750-48e4-94fe-7c49dd547516-catalog-content\") pod \"certified-operators-hp2fx\" (UID: \"3b779871-f750-48e4-94fe-7c49dd547516\") " pod="openshift-marketplace/certified-operators-hp2fx" Nov 23 15:40:53 crc kubenswrapper[4718]: I1123 15:40:53.930089 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-g5zvm\" (UniqueName: \"kubernetes.io/projected/3b779871-f750-48e4-94fe-7c49dd547516-kube-api-access-g5zvm\") pod \"certified-operators-hp2fx\" (UID: \"3b779871-f750-48e4-94fe-7c49dd547516\") " pod="openshift-marketplace/certified-operators-hp2fx" Nov 23 15:40:54 crc kubenswrapper[4718]: I1123 15:40:54.034621 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hp2fx" Nov 23 15:40:54 crc kubenswrapper[4718]: I1123 15:40:54.500488 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hp2fx"] Nov 23 15:40:54 crc kubenswrapper[4718]: I1123 15:40:54.739212 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hp2fx" event={"ID":"3b779871-f750-48e4-94fe-7c49dd547516","Type":"ContainerStarted","Data":"16ab28dca67bf19c037b8ec1dea636e9916eb758137690d1a82eeeef39041fb0"} Nov 23 15:40:54 crc kubenswrapper[4718]: I1123 15:40:54.739518 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hp2fx" event={"ID":"3b779871-f750-48e4-94fe-7c49dd547516","Type":"ContainerStarted","Data":"73aaf04ef38a293928b3d15aec5d86fc8f49c0aa7337938cb53d09872c4ed4f3"} Nov 23 15:40:55 crc kubenswrapper[4718]: I1123 15:40:55.753136 4718 generic.go:334] "Generic (PLEG): container finished" podID="3b779871-f750-48e4-94fe-7c49dd547516" containerID="16ab28dca67bf19c037b8ec1dea636e9916eb758137690d1a82eeeef39041fb0" exitCode=0 Nov 23 15:40:55 crc kubenswrapper[4718]: I1123 15:40:55.753263 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hp2fx" event={"ID":"3b779871-f750-48e4-94fe-7c49dd547516","Type":"ContainerDied","Data":"16ab28dca67bf19c037b8ec1dea636e9916eb758137690d1a82eeeef39041fb0"} Nov 23 15:40:55 crc kubenswrapper[4718]: I1123 15:40:55.758067 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 15:40:56 crc kubenswrapper[4718]: I1123 15:40:56.765781 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hp2fx" event={"ID":"3b779871-f750-48e4-94fe-7c49dd547516","Type":"ContainerStarted","Data":"7c74295fb955c5e4a28c431a21d0f955c2e723a854c1de89ecf1d4ea4d0fc166"} Nov 23 15:40:57 crc kubenswrapper[4718]: I1123 15:40:57.776408 4718 generic.go:334] "Generic (PLEG): container finished" podID="3b779871-f750-48e4-94fe-7c49dd547516" containerID="7c74295fb955c5e4a28c431a21d0f955c2e723a854c1de89ecf1d4ea4d0fc166" exitCode=0 Nov 23 15:40:57 crc kubenswrapper[4718]: I1123 15:40:57.776518 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hp2fx" event={"ID":"3b779871-f750-48e4-94fe-7c49dd547516","Type":"ContainerDied","Data":"7c74295fb955c5e4a28c431a21d0f955c2e723a854c1de89ecf1d4ea4d0fc166"} Nov 23 15:40:58 crc kubenswrapper[4718]: I1123 15:40:58.789707 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hp2fx" event={"ID":"3b779871-f750-48e4-94fe-7c49dd547516","Type":"ContainerStarted","Data":"e1d1128651cdeb3d70a92c79f7fecec51edfcc5e4b6076993626a8617cf3fc90"} Nov 23 15:40:58 crc kubenswrapper[4718]: I1123 15:40:58.824516 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hp2fx" podStartSLOduration=3.130230275 podStartE2EDuration="5.824489341s" 
podCreationTimestamp="2025-11-23 15:40:53 +0000 UTC" firstStartedPulling="2025-11-23 15:40:55.75761028 +0000 UTC m=+3306.997230164" lastFinishedPulling="2025-11-23 15:40:58.451869376 +0000 UTC m=+3309.691489230" observedRunningTime="2025-11-23 15:40:58.810067864 +0000 UTC m=+3310.049687708" watchObservedRunningTime="2025-11-23 15:40:58.824489341 +0000 UTC m=+3310.064109215" Nov 23 15:41:04 crc kubenswrapper[4718]: I1123 15:41:04.035317 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hp2fx" Nov 23 15:41:04 crc kubenswrapper[4718]: I1123 15:41:04.036262 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hp2fx" Nov 23 15:41:04 crc kubenswrapper[4718]: I1123 15:41:04.102901 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hp2fx" Nov 23 15:41:04 crc kubenswrapper[4718]: I1123 15:41:04.909472 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hp2fx" Nov 23 15:41:04 crc kubenswrapper[4718]: I1123 15:41:04.957605 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hp2fx"] Nov 23 15:41:06 crc kubenswrapper[4718]: I1123 15:41:06.870184 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hp2fx" podUID="3b779871-f750-48e4-94fe-7c49dd547516" containerName="registry-server" containerID="cri-o://e1d1128651cdeb3d70a92c79f7fecec51edfcc5e4b6076993626a8617cf3fc90" gracePeriod=2 Nov 23 15:41:07 crc kubenswrapper[4718]: I1123 15:41:07.732909 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hp2fx" Nov 23 15:41:07 crc kubenswrapper[4718]: I1123 15:41:07.879539 4718 generic.go:334] "Generic (PLEG): container finished" podID="3b779871-f750-48e4-94fe-7c49dd547516" containerID="e1d1128651cdeb3d70a92c79f7fecec51edfcc5e4b6076993626a8617cf3fc90" exitCode=0 Nov 23 15:41:07 crc kubenswrapper[4718]: I1123 15:41:07.879589 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hp2fx" event={"ID":"3b779871-f750-48e4-94fe-7c49dd547516","Type":"ContainerDied","Data":"e1d1128651cdeb3d70a92c79f7fecec51edfcc5e4b6076993626a8617cf3fc90"} Nov 23 15:41:07 crc kubenswrapper[4718]: I1123 15:41:07.879616 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hp2fx" Nov 23 15:41:07 crc kubenswrapper[4718]: I1123 15:41:07.879641 4718 scope.go:117] "RemoveContainer" containerID="e1d1128651cdeb3d70a92c79f7fecec51edfcc5e4b6076993626a8617cf3fc90" Nov 23 15:41:07 crc kubenswrapper[4718]: I1123 15:41:07.879627 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hp2fx" event={"ID":"3b779871-f750-48e4-94fe-7c49dd547516","Type":"ContainerDied","Data":"73aaf04ef38a293928b3d15aec5d86fc8f49c0aa7337938cb53d09872c4ed4f3"} Nov 23 15:41:07 crc kubenswrapper[4718]: I1123 15:41:07.885076 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b779871-f750-48e4-94fe-7c49dd547516-utilities\") pod \"3b779871-f750-48e4-94fe-7c49dd547516\" (UID: \"3b779871-f750-48e4-94fe-7c49dd547516\") " Nov 23 15:41:07 crc kubenswrapper[4718]: I1123 15:41:07.885160 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b779871-f750-48e4-94fe-7c49dd547516-catalog-content\") pod \"3b779871-f750-48e4-94fe-7c49dd547516\" (UID: \"3b779871-f750-48e4-94fe-7c49dd547516\") " Nov 23 15:41:07 crc kubenswrapper[4718]: I1123 15:41:07.885543 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5zvm\" (UniqueName: \"kubernetes.io/projected/3b779871-f750-48e4-94fe-7c49dd547516-kube-api-access-g5zvm\") pod \"3b779871-f750-48e4-94fe-7c49dd547516\" (UID: \"3b779871-f750-48e4-94fe-7c49dd547516\") " Nov 23 15:41:07 crc kubenswrapper[4718]: I1123 15:41:07.886243 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b779871-f750-48e4-94fe-7c49dd547516-utilities" (OuterVolumeSpecName: "utilities") pod "3b779871-f750-48e4-94fe-7c49dd547516" (UID: "3b779871-f750-48e4-94fe-7c49dd547516"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:41:07 crc kubenswrapper[4718]: I1123 15:41:07.893055 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b779871-f750-48e4-94fe-7c49dd547516-kube-api-access-g5zvm" (OuterVolumeSpecName: "kube-api-access-g5zvm") pod "3b779871-f750-48e4-94fe-7c49dd547516" (UID: "3b779871-f750-48e4-94fe-7c49dd547516"). InnerVolumeSpecName "kube-api-access-g5zvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:41:07 crc kubenswrapper[4718]: I1123 15:41:07.904164 4718 scope.go:117] "RemoveContainer" containerID="7c74295fb955c5e4a28c431a21d0f955c2e723a854c1de89ecf1d4ea4d0fc166" Nov 23 15:41:07 crc kubenswrapper[4718]: I1123 15:41:07.947384 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b779871-f750-48e4-94fe-7c49dd547516-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b779871-f750-48e4-94fe-7c49dd547516" (UID: "3b779871-f750-48e4-94fe-7c49dd547516"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:41:07 crc kubenswrapper[4718]: I1123 15:41:07.947524 4718 scope.go:117] "RemoveContainer" containerID="16ab28dca67bf19c037b8ec1dea636e9916eb758137690d1a82eeeef39041fb0" Nov 23 15:41:07 crc kubenswrapper[4718]: I1123 15:41:07.988022 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b779871-f750-48e4-94fe-7c49dd547516-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 15:41:07 crc kubenswrapper[4718]: I1123 15:41:07.988073 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b779871-f750-48e4-94fe-7c49dd547516-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 15:41:07 crc kubenswrapper[4718]: I1123 15:41:07.988280 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5zvm\" (UniqueName: \"kubernetes.io/projected/3b779871-f750-48e4-94fe-7c49dd547516-kube-api-access-g5zvm\") on node \"crc\" DevicePath \"\"" Nov 23 15:41:07 crc kubenswrapper[4718]: I1123 15:41:07.993823 4718 scope.go:117] "RemoveContainer" containerID="e1d1128651cdeb3d70a92c79f7fecec51edfcc5e4b6076993626a8617cf3fc90" Nov 23 15:41:07 crc kubenswrapper[4718]: E1123 15:41:07.994515 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1d1128651cdeb3d70a92c79f7fecec51edfcc5e4b6076993626a8617cf3fc90\": container with ID starting with e1d1128651cdeb3d70a92c79f7fecec51edfcc5e4b6076993626a8617cf3fc90 not found: ID does not exist" containerID="e1d1128651cdeb3d70a92c79f7fecec51edfcc5e4b6076993626a8617cf3fc90" Nov 23 15:41:07 crc kubenswrapper[4718]: I1123 15:41:07.994552 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1d1128651cdeb3d70a92c79f7fecec51edfcc5e4b6076993626a8617cf3fc90"} err="failed to get container status \"e1d1128651cdeb3d70a92c79f7fecec51edfcc5e4b6076993626a8617cf3fc90\": rpc error: code = NotFound desc = could not find container \"e1d1128651cdeb3d70a92c79f7fecec51edfcc5e4b6076993626a8617cf3fc90\": container with ID starting with e1d1128651cdeb3d70a92c79f7fecec51edfcc5e4b6076993626a8617cf3fc90 not found: ID does not exist" Nov 23 15:41:07 crc kubenswrapper[4718]: I1123 15:41:07.994578 4718 scope.go:117] "RemoveContainer" containerID="7c74295fb955c5e4a28c431a21d0f955c2e723a854c1de89ecf1d4ea4d0fc166" Nov 23 15:41:07 crc kubenswrapper[4718]: E1123 15:41:07.994924 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c74295fb955c5e4a28c431a21d0f955c2e723a854c1de89ecf1d4ea4d0fc166\": container with ID starting with 7c74295fb955c5e4a28c431a21d0f955c2e723a854c1de89ecf1d4ea4d0fc166 not found: ID does not exist" containerID="7c74295fb955c5e4a28c431a21d0f955c2e723a854c1de89ecf1d4ea4d0fc166" Nov 23 15:41:07 crc kubenswrapper[4718]: I1123 15:41:07.994953 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c74295fb955c5e4a28c431a21d0f955c2e723a854c1de89ecf1d4ea4d0fc166"} err="failed to get container status \"7c74295fb955c5e4a28c431a21d0f955c2e723a854c1de89ecf1d4ea4d0fc166\": rpc error: code = NotFound desc = could not find container \"7c74295fb955c5e4a28c431a21d0f955c2e723a854c1de89ecf1d4ea4d0fc166\": container with ID starting with 7c74295fb955c5e4a28c431a21d0f955c2e723a854c1de89ecf1d4ea4d0fc166 not found: ID does not exist" Nov 23 15:41:07 crc 
Nov 23 15:41:07 crc kubenswrapper[4718]: I1123 15:41:07.994969 4718 scope.go:117] "RemoveContainer" containerID="16ab28dca67bf19c037b8ec1dea636e9916eb758137690d1a82eeeef39041fb0" Nov 23 15:41:07 crc kubenswrapper[4718]: E1123 15:41:07.995209 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16ab28dca67bf19c037b8ec1dea636e9916eb758137690d1a82eeeef39041fb0\": container with ID starting with 16ab28dca67bf19c037b8ec1dea636e9916eb758137690d1a82eeeef39041fb0 not found: ID does not exist" containerID="16ab28dca67bf19c037b8ec1dea636e9916eb758137690d1a82eeeef39041fb0" Nov 23 15:41:07 crc kubenswrapper[4718]: I1123 15:41:07.995235 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16ab28dca67bf19c037b8ec1dea636e9916eb758137690d1a82eeeef39041fb0"} err="failed to get container status \"16ab28dca67bf19c037b8ec1dea636e9916eb758137690d1a82eeeef39041fb0\": rpc error: code = NotFound desc = could not find container \"16ab28dca67bf19c037b8ec1dea636e9916eb758137690d1a82eeeef39041fb0\": container with ID starting with 16ab28dca67bf19c037b8ec1dea636e9916eb758137690d1a82eeeef39041fb0 not found: ID does not exist" Nov 23 15:41:08 crc kubenswrapper[4718]: I1123 15:41:08.215517 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hp2fx"] Nov 23 15:41:08 crc kubenswrapper[4718]: I1123 15:41:08.223049 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hp2fx"] Nov 23 15:41:08 crc kubenswrapper[4718]: I1123 15:41:08.451735 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b779871-f750-48e4-94fe-7c49dd547516" path="/var/lib/kubelet/pods/3b779871-f750-48e4-94fe-7c49dd547516/volumes" Nov 23 15:41:23 crc kubenswrapper[4718]: I1123 15:41:23.053333 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 15:41:23 crc kubenswrapper[4718]: I1123 15:41:23.054017 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 15:41:32 crc kubenswrapper[4718]: I1123 15:41:32.266029 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lsxgh"] Nov 23 15:41:32 crc kubenswrapper[4718]: E1123 15:41:32.267176 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b779871-f750-48e4-94fe-7c49dd547516" containerName="extract-content" Nov 23 15:41:32 crc kubenswrapper[4718]: I1123 15:41:32.267192 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b779871-f750-48e4-94fe-7c49dd547516" containerName="extract-content" Nov 23 15:41:32 crc kubenswrapper[4718]: E1123 15:41:32.267233 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b779871-f750-48e4-94fe-7c49dd547516" containerName="extract-utilities" Nov 23 15:41:32 crc kubenswrapper[4718]: I1123 15:41:32.267269 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b779871-f750-48e4-94fe-7c49dd547516" containerName="extract-utilities" Nov 23 15:41:32 crc
kubenswrapper[4718]: E1123 15:41:32.267294 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b779871-f750-48e4-94fe-7c49dd547516" containerName="registry-server" Nov 23 15:41:32 crc kubenswrapper[4718]: I1123 15:41:32.267303 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b779871-f750-48e4-94fe-7c49dd547516" containerName="registry-server" Nov 23 15:41:32 crc kubenswrapper[4718]: I1123 15:41:32.267555 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b779871-f750-48e4-94fe-7c49dd547516" containerName="registry-server" Nov 23 15:41:32 crc kubenswrapper[4718]: I1123 15:41:32.269350 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lsxgh" Nov 23 15:41:32 crc kubenswrapper[4718]: I1123 15:41:32.283219 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lsxgh"] Nov 23 15:41:32 crc kubenswrapper[4718]: I1123 15:41:32.339562 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a11598f2-9b89-4792-a637-6faeb5185b50-catalog-content\") pod \"redhat-marketplace-lsxgh\" (UID: \"a11598f2-9b89-4792-a637-6faeb5185b50\") " pod="openshift-marketplace/redhat-marketplace-lsxgh" Nov 23 15:41:32 crc kubenswrapper[4718]: I1123 15:41:32.339647 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a11598f2-9b89-4792-a637-6faeb5185b50-utilities\") pod \"redhat-marketplace-lsxgh\" (UID: \"a11598f2-9b89-4792-a637-6faeb5185b50\") " pod="openshift-marketplace/redhat-marketplace-lsxgh" Nov 23 15:41:32 crc kubenswrapper[4718]: I1123 15:41:32.339691 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68wg2\" (UniqueName: \"kubernetes.io/projected/a11598f2-9b89-4792-a637-6faeb5185b50-kube-api-access-68wg2\") pod \"redhat-marketplace-lsxgh\" (UID: \"a11598f2-9b89-4792-a637-6faeb5185b50\") " pod="openshift-marketplace/redhat-marketplace-lsxgh" Nov 23 15:41:32 crc kubenswrapper[4718]: I1123 15:41:32.440858 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68wg2\" (UniqueName: \"kubernetes.io/projected/a11598f2-9b89-4792-a637-6faeb5185b50-kube-api-access-68wg2\") pod \"redhat-marketplace-lsxgh\" (UID: \"a11598f2-9b89-4792-a637-6faeb5185b50\") " pod="openshift-marketplace/redhat-marketplace-lsxgh" Nov 23 15:41:32 crc kubenswrapper[4718]: I1123 15:41:32.441027 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a11598f2-9b89-4792-a637-6faeb5185b50-catalog-content\") pod \"redhat-marketplace-lsxgh\" (UID: \"a11598f2-9b89-4792-a637-6faeb5185b50\") " pod="openshift-marketplace/redhat-marketplace-lsxgh" Nov 23 15:41:32 crc kubenswrapper[4718]: I1123 15:41:32.441068 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a11598f2-9b89-4792-a637-6faeb5185b50-utilities\") pod \"redhat-marketplace-lsxgh\" (UID: \"a11598f2-9b89-4792-a637-6faeb5185b50\") " pod="openshift-marketplace/redhat-marketplace-lsxgh" Nov 23 15:41:32 crc kubenswrapper[4718]: I1123 15:41:32.441576 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a11598f2-9b89-4792-a637-6faeb5185b50-utilities\") pod \"redhat-marketplace-lsxgh\" (UID: \"a11598f2-9b89-4792-a637-6faeb5185b50\") " pod="openshift-marketplace/redhat-marketplace-lsxgh" Nov 23 15:41:32 crc kubenswrapper[4718]: I1123 15:41:32.441982 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a11598f2-9b89-4792-a637-6faeb5185b50-catalog-content\") pod \"redhat-marketplace-lsxgh\" (UID: \"a11598f2-9b89-4792-a637-6faeb5185b50\") " pod="openshift-marketplace/redhat-marketplace-lsxgh" Nov 23 15:41:32 crc kubenswrapper[4718]: I1123 15:41:32.465713 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68wg2\" (UniqueName: \"kubernetes.io/projected/a11598f2-9b89-4792-a637-6faeb5185b50-kube-api-access-68wg2\") pod \"redhat-marketplace-lsxgh\" (UID: \"a11598f2-9b89-4792-a637-6faeb5185b50\") " pod="openshift-marketplace/redhat-marketplace-lsxgh" Nov 23 15:41:32 crc kubenswrapper[4718]: I1123 15:41:32.604178 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lsxgh" Nov 23 15:41:33 crc kubenswrapper[4718]: I1123 15:41:33.100211 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lsxgh"] Nov 23 15:41:33 crc kubenswrapper[4718]: I1123 15:41:33.155343 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lsxgh" event={"ID":"a11598f2-9b89-4792-a637-6faeb5185b50","Type":"ContainerStarted","Data":"bb7d2c2b225473f770b9792cf943599da13a6f3b5fbce9ea24702bd480c58975"} Nov 23 15:41:34 crc kubenswrapper[4718]: I1123 15:41:34.173652 4718 generic.go:334] "Generic (PLEG): container finished" podID="a11598f2-9b89-4792-a637-6faeb5185b50" containerID="c889bb353786b6ab8e8815b44bea7f10bd4289e400d90c28ee70f83af659d446" exitCode=0 Nov 23 15:41:34 crc kubenswrapper[4718]: I1123 15:41:34.173770 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lsxgh" event={"ID":"a11598f2-9b89-4792-a637-6faeb5185b50","Type":"ContainerDied","Data":"c889bb353786b6ab8e8815b44bea7f10bd4289e400d90c28ee70f83af659d446"} Nov 23 15:41:36 crc kubenswrapper[4718]: I1123 15:41:36.202270 4718 generic.go:334] "Generic (PLEG): container finished" podID="a11598f2-9b89-4792-a637-6faeb5185b50" containerID="2a8306e4d9471ce8651d523e55c94db4ff2883667ae05ad1f5e297218408e31f" exitCode=0 Nov 23 15:41:36 crc kubenswrapper[4718]: I1123 15:41:36.202333 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lsxgh" event={"ID":"a11598f2-9b89-4792-a637-6faeb5185b50","Type":"ContainerDied","Data":"2a8306e4d9471ce8651d523e55c94db4ff2883667ae05ad1f5e297218408e31f"} Nov 23 15:41:37 crc kubenswrapper[4718]: I1123 15:41:37.213730 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lsxgh" event={"ID":"a11598f2-9b89-4792-a637-6faeb5185b50","Type":"ContainerStarted","Data":"d4f6450a8cdb220136887403b4e91b14f2680eceab3a68a6acc5d3371855a4d3"} Nov 23 15:41:37 crc kubenswrapper[4718]: I1123 15:41:37.238251 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lsxgh" podStartSLOduration=2.790665979 podStartE2EDuration="5.238227928s" podCreationTimestamp="2025-11-23 15:41:32 +0000 UTC" firstStartedPulling="2025-11-23 15:41:34.177345453 +0000 UTC m=+3345.416965297" 
lastFinishedPulling="2025-11-23 15:41:36.624907392 +0000 UTC m=+3347.864527246" observedRunningTime="2025-11-23 15:41:37.234770063 +0000 UTC m=+3348.474389907" watchObservedRunningTime="2025-11-23 15:41:37.238227928 +0000 UTC m=+3348.477847772" Nov 23 15:41:42 crc kubenswrapper[4718]: I1123 15:41:42.604477 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lsxgh" Nov 23 15:41:42 crc kubenswrapper[4718]: I1123 15:41:42.605025 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lsxgh" Nov 23 15:41:42 crc kubenswrapper[4718]: I1123 15:41:42.653767 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lsxgh" Nov 23 15:41:43 crc kubenswrapper[4718]: I1123 15:41:43.324416 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lsxgh" Nov 23 15:41:43 crc kubenswrapper[4718]: I1123 15:41:43.380091 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lsxgh"] Nov 23 15:41:45 crc kubenswrapper[4718]: I1123 15:41:45.285141 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lsxgh" podUID="a11598f2-9b89-4792-a637-6faeb5185b50" containerName="registry-server" containerID="cri-o://d4f6450a8cdb220136887403b4e91b14f2680eceab3a68a6acc5d3371855a4d3" gracePeriod=2 Nov 23 15:41:45 crc kubenswrapper[4718]: I1123 15:41:45.298837 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vhnlj"] Nov 23 15:41:45 crc kubenswrapper[4718]: I1123 15:41:45.301297 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vhnlj" Nov 23 15:41:45 crc kubenswrapper[4718]: I1123 15:41:45.321372 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vhnlj"] Nov 23 15:41:45 crc kubenswrapper[4718]: I1123 15:41:45.407565 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9d5a44-04b0-4faf-894d-0bb1989a7a61-utilities\") pod \"redhat-operators-vhnlj\" (UID: \"0f9d5a44-04b0-4faf-894d-0bb1989a7a61\") " pod="openshift-marketplace/redhat-operators-vhnlj" Nov 23 15:41:45 crc kubenswrapper[4718]: I1123 15:41:45.407619 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9d5a44-04b0-4faf-894d-0bb1989a7a61-catalog-content\") pod \"redhat-operators-vhnlj\" (UID: \"0f9d5a44-04b0-4faf-894d-0bb1989a7a61\") " pod="openshift-marketplace/redhat-operators-vhnlj" Nov 23 15:41:45 crc kubenswrapper[4718]: I1123 15:41:45.407650 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xrk9\" (UniqueName: \"kubernetes.io/projected/0f9d5a44-04b0-4faf-894d-0bb1989a7a61-kube-api-access-6xrk9\") pod \"redhat-operators-vhnlj\" (UID: \"0f9d5a44-04b0-4faf-894d-0bb1989a7a61\") " pod="openshift-marketplace/redhat-operators-vhnlj" Nov 23 15:41:45 crc kubenswrapper[4718]: I1123 15:41:45.510607 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9d5a44-04b0-4faf-894d-0bb1989a7a61-utilities\") pod \"redhat-operators-vhnlj\" (UID: \"0f9d5a44-04b0-4faf-894d-0bb1989a7a61\") " pod="openshift-marketplace/redhat-operators-vhnlj" Nov 23 15:41:45 crc kubenswrapper[4718]: I1123 15:41:45.510945 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9d5a44-04b0-4faf-894d-0bb1989a7a61-catalog-content\") pod \"redhat-operators-vhnlj\" (UID: \"0f9d5a44-04b0-4faf-894d-0bb1989a7a61\") " pod="openshift-marketplace/redhat-operators-vhnlj" Nov 23 15:41:45 crc kubenswrapper[4718]: I1123 15:41:45.510987 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xrk9\" (UniqueName: \"kubernetes.io/projected/0f9d5a44-04b0-4faf-894d-0bb1989a7a61-kube-api-access-6xrk9\") pod \"redhat-operators-vhnlj\" (UID: \"0f9d5a44-04b0-4faf-894d-0bb1989a7a61\") " pod="openshift-marketplace/redhat-operators-vhnlj" Nov 23 15:41:45 crc kubenswrapper[4718]: I1123 15:41:45.511921 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9d5a44-04b0-4faf-894d-0bb1989a7a61-utilities\") pod \"redhat-operators-vhnlj\" (UID: \"0f9d5a44-04b0-4faf-894d-0bb1989a7a61\") " pod="openshift-marketplace/redhat-operators-vhnlj" Nov 23 15:41:45 crc kubenswrapper[4718]: I1123 15:41:45.511994 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9d5a44-04b0-4faf-894d-0bb1989a7a61-catalog-content\") pod \"redhat-operators-vhnlj\" (UID: \"0f9d5a44-04b0-4faf-894d-0bb1989a7a61\") " pod="openshift-marketplace/redhat-operators-vhnlj" Nov 23 15:41:45 crc kubenswrapper[4718]: I1123 15:41:45.531470 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6xrk9\" (UniqueName: \"kubernetes.io/projected/0f9d5a44-04b0-4faf-894d-0bb1989a7a61-kube-api-access-6xrk9\") pod \"redhat-operators-vhnlj\" (UID: \"0f9d5a44-04b0-4faf-894d-0bb1989a7a61\") " pod="openshift-marketplace/redhat-operators-vhnlj" Nov 23 15:41:45 crc kubenswrapper[4718]: I1123 15:41:45.623643 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vhnlj" Nov 23 15:41:45 crc kubenswrapper[4718]: I1123 15:41:45.853295 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lsxgh" Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.020987 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a11598f2-9b89-4792-a637-6faeb5185b50-catalog-content\") pod \"a11598f2-9b89-4792-a637-6faeb5185b50\" (UID: \"a11598f2-9b89-4792-a637-6faeb5185b50\") " Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.021141 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a11598f2-9b89-4792-a637-6faeb5185b50-utilities\") pod \"a11598f2-9b89-4792-a637-6faeb5185b50\" (UID: \"a11598f2-9b89-4792-a637-6faeb5185b50\") " Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.021256 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68wg2\" (UniqueName: \"kubernetes.io/projected/a11598f2-9b89-4792-a637-6faeb5185b50-kube-api-access-68wg2\") pod \"a11598f2-9b89-4792-a637-6faeb5185b50\" (UID: \"a11598f2-9b89-4792-a637-6faeb5185b50\") " Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.023358 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a11598f2-9b89-4792-a637-6faeb5185b50-utilities" (OuterVolumeSpecName: "utilities") pod "a11598f2-9b89-4792-a637-6faeb5185b50" (UID: "a11598f2-9b89-4792-a637-6faeb5185b50"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.026368 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a11598f2-9b89-4792-a637-6faeb5185b50-kube-api-access-68wg2" (OuterVolumeSpecName: "kube-api-access-68wg2") pod "a11598f2-9b89-4792-a637-6faeb5185b50" (UID: "a11598f2-9b89-4792-a637-6faeb5185b50"). InnerVolumeSpecName "kube-api-access-68wg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.045134 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a11598f2-9b89-4792-a637-6faeb5185b50-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a11598f2-9b89-4792-a637-6faeb5185b50" (UID: "a11598f2-9b89-4792-a637-6faeb5185b50"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.123737 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a11598f2-9b89-4792-a637-6faeb5185b50-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.123779 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a11598f2-9b89-4792-a637-6faeb5185b50-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.123792 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68wg2\" (UniqueName: \"kubernetes.io/projected/a11598f2-9b89-4792-a637-6faeb5185b50-kube-api-access-68wg2\") on node \"crc\" DevicePath \"\"" Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.138819 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vhnlj"] Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.302629 4718 generic.go:334] "Generic (PLEG): container finished" podID="a11598f2-9b89-4792-a637-6faeb5185b50" containerID="d4f6450a8cdb220136887403b4e91b14f2680eceab3a68a6acc5d3371855a4d3" exitCode=0 Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.302875 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lsxgh" Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.302909 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lsxgh" event={"ID":"a11598f2-9b89-4792-a637-6faeb5185b50","Type":"ContainerDied","Data":"d4f6450a8cdb220136887403b4e91b14f2680eceab3a68a6acc5d3371855a4d3"} Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.303496 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lsxgh" event={"ID":"a11598f2-9b89-4792-a637-6faeb5185b50","Type":"ContainerDied","Data":"bb7d2c2b225473f770b9792cf943599da13a6f3b5fbce9ea24702bd480c58975"} Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.303518 4718 scope.go:117] "RemoveContainer" containerID="d4f6450a8cdb220136887403b4e91b14f2680eceab3a68a6acc5d3371855a4d3" Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.307136 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhnlj" event={"ID":"0f9d5a44-04b0-4faf-894d-0bb1989a7a61","Type":"ContainerStarted","Data":"1debdd39883de547cdd1c789dfc926647ff6f775f4cbe83ff00ff03cf8412937"} Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.340296 4718 scope.go:117] "RemoveContainer" containerID="2a8306e4d9471ce8651d523e55c94db4ff2883667ae05ad1f5e297218408e31f" Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.341819 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lsxgh"] Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.357160 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lsxgh"] Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.371847 4718 scope.go:117] "RemoveContainer" containerID="c889bb353786b6ab8e8815b44bea7f10bd4289e400d90c28ee70f83af659d446" Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.453983 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a11598f2-9b89-4792-a637-6faeb5185b50" 
path="/var/lib/kubelet/pods/a11598f2-9b89-4792-a637-6faeb5185b50/volumes" Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.492663 4718 scope.go:117] "RemoveContainer" containerID="d4f6450a8cdb220136887403b4e91b14f2680eceab3a68a6acc5d3371855a4d3" Nov 23 15:41:46 crc kubenswrapper[4718]: E1123 15:41:46.497620 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4f6450a8cdb220136887403b4e91b14f2680eceab3a68a6acc5d3371855a4d3\": container with ID starting with d4f6450a8cdb220136887403b4e91b14f2680eceab3a68a6acc5d3371855a4d3 not found: ID does not exist" containerID="d4f6450a8cdb220136887403b4e91b14f2680eceab3a68a6acc5d3371855a4d3" Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.497694 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4f6450a8cdb220136887403b4e91b14f2680eceab3a68a6acc5d3371855a4d3"} err="failed to get container status \"d4f6450a8cdb220136887403b4e91b14f2680eceab3a68a6acc5d3371855a4d3\": rpc error: code = NotFound desc = could not find container \"d4f6450a8cdb220136887403b4e91b14f2680eceab3a68a6acc5d3371855a4d3\": container with ID starting with d4f6450a8cdb220136887403b4e91b14f2680eceab3a68a6acc5d3371855a4d3 not found: ID does not exist" Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.497733 4718 scope.go:117] "RemoveContainer" containerID="2a8306e4d9471ce8651d523e55c94db4ff2883667ae05ad1f5e297218408e31f" Nov 23 15:41:46 crc kubenswrapper[4718]: E1123 15:41:46.501749 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a8306e4d9471ce8651d523e55c94db4ff2883667ae05ad1f5e297218408e31f\": container with ID starting with 2a8306e4d9471ce8651d523e55c94db4ff2883667ae05ad1f5e297218408e31f not found: ID does not exist" containerID="2a8306e4d9471ce8651d523e55c94db4ff2883667ae05ad1f5e297218408e31f" Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.501788 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a8306e4d9471ce8651d523e55c94db4ff2883667ae05ad1f5e297218408e31f"} err="failed to get container status \"2a8306e4d9471ce8651d523e55c94db4ff2883667ae05ad1f5e297218408e31f\": rpc error: code = NotFound desc = could not find container \"2a8306e4d9471ce8651d523e55c94db4ff2883667ae05ad1f5e297218408e31f\": container with ID starting with 2a8306e4d9471ce8651d523e55c94db4ff2883667ae05ad1f5e297218408e31f not found: ID does not exist" Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.501812 4718 scope.go:117] "RemoveContainer" containerID="c889bb353786b6ab8e8815b44bea7f10bd4289e400d90c28ee70f83af659d446" Nov 23 15:41:46 crc kubenswrapper[4718]: E1123 15:41:46.502404 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c889bb353786b6ab8e8815b44bea7f10bd4289e400d90c28ee70f83af659d446\": container with ID starting with c889bb353786b6ab8e8815b44bea7f10bd4289e400d90c28ee70f83af659d446 not found: ID does not exist" containerID="c889bb353786b6ab8e8815b44bea7f10bd4289e400d90c28ee70f83af659d446" Nov 23 15:41:46 crc kubenswrapper[4718]: I1123 15:41:46.502469 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c889bb353786b6ab8e8815b44bea7f10bd4289e400d90c28ee70f83af659d446"} err="failed to get container status \"c889bb353786b6ab8e8815b44bea7f10bd4289e400d90c28ee70f83af659d446\": rpc error: code = 
NotFound desc = could not find container \"c889bb353786b6ab8e8815b44bea7f10bd4289e400d90c28ee70f83af659d446\": container with ID starting with c889bb353786b6ab8e8815b44bea7f10bd4289e400d90c28ee70f83af659d446 not found: ID does not exist" Nov 23 15:41:47 crc kubenswrapper[4718]: I1123 15:41:47.320686 4718 generic.go:334] "Generic (PLEG): container finished" podID="0f9d5a44-04b0-4faf-894d-0bb1989a7a61" containerID="26c277e5a26939d4b95f5af0b2cebdd9f36061145df113c0b1ebe59c368cea2b" exitCode=0 Nov 23 15:41:47 crc kubenswrapper[4718]: I1123 15:41:47.320733 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhnlj" event={"ID":"0f9d5a44-04b0-4faf-894d-0bb1989a7a61","Type":"ContainerDied","Data":"26c277e5a26939d4b95f5af0b2cebdd9f36061145df113c0b1ebe59c368cea2b"} Nov 23 15:41:48 crc kubenswrapper[4718]: I1123 15:41:48.335986 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhnlj" event={"ID":"0f9d5a44-04b0-4faf-894d-0bb1989a7a61","Type":"ContainerStarted","Data":"0842fe2d25252461e14d629c59b62c164c2662dfc497fdfc0a305deba3053e61"} Nov 23 15:41:49 crc kubenswrapper[4718]: I1123 15:41:49.355083 4718 generic.go:334] "Generic (PLEG): container finished" podID="0f9d5a44-04b0-4faf-894d-0bb1989a7a61" containerID="0842fe2d25252461e14d629c59b62c164c2662dfc497fdfc0a305deba3053e61" exitCode=0 Nov 23 15:41:49 crc kubenswrapper[4718]: I1123 15:41:49.355193 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhnlj" event={"ID":"0f9d5a44-04b0-4faf-894d-0bb1989a7a61","Type":"ContainerDied","Data":"0842fe2d25252461e14d629c59b62c164c2662dfc497fdfc0a305deba3053e61"} Nov 23 15:41:50 crc kubenswrapper[4718]: I1123 15:41:50.367288 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhnlj" event={"ID":"0f9d5a44-04b0-4faf-894d-0bb1989a7a61","Type":"ContainerStarted","Data":"38957e446eb2fe3abe55c53a956f2bacb23778564a3f2bfa16d94921d4a769f1"} Nov 23 15:41:50 crc kubenswrapper[4718]: I1123 15:41:50.389749 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vhnlj" podStartSLOduration=2.9574170669999997 podStartE2EDuration="5.389723356s" podCreationTimestamp="2025-11-23 15:41:45 +0000 UTC" firstStartedPulling="2025-11-23 15:41:47.323703938 +0000 UTC m=+3358.563323772" lastFinishedPulling="2025-11-23 15:41:49.756010217 +0000 UTC m=+3360.995630061" observedRunningTime="2025-11-23 15:41:50.382726363 +0000 UTC m=+3361.622346227" watchObservedRunningTime="2025-11-23 15:41:50.389723356 +0000 UTC m=+3361.629343200" Nov 23 15:41:51 crc kubenswrapper[4718]: I1123 15:41:51.378338 4718 generic.go:334] "Generic (PLEG): container finished" podID="2053f5ea-ae54-4b1d-951f-2355f69f1062" containerID="043ae55e25ed5857f85545535cba3475e54819fa5b1a7dbc1d338d30a63a6ff6" exitCode=0 Nov 23 15:41:51 crc kubenswrapper[4718]: I1123 15:41:51.378416 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2053f5ea-ae54-4b1d-951f-2355f69f1062","Type":"ContainerDied","Data":"043ae55e25ed5857f85545535cba3475e54819fa5b1a7dbc1d338d30a63a6ff6"} Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.774838 4718 util.go:48] "No ready sandbox for pod can be found. 
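The "ContainerStatus from runtime service failed ... NotFound" pairs above are a benign race: deleting the pod sandbox already removed its containers, so the follow-up status lookup fails with gRPC NotFound and the kubelet merely logs it. A small sketch of how such CRI errors are typically classified, assuming the standard grpc-go status package; this is not the kubelet's exact code.

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// isNotFound reports whether err carries gRPC code NotFound, the code the
// CRI runtime returns above when a container was already removed.
func isNotFound(err error) bool {
	s, ok := status.FromError(err)
	return ok && s.Code() == codes.NotFound
}

func main() {
	// Simulate the error shape seen in the log.
	err := status.Error(codes.NotFound, "could not find container")
	fmt.Println(isNotFound(err)) // true: safe to treat removal as complete
}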
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.845848 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw9sf\" (UniqueName: \"kubernetes.io/projected/2053f5ea-ae54-4b1d-951f-2355f69f1062-kube-api-access-fw9sf\") pod \"2053f5ea-ae54-4b1d-951f-2355f69f1062\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.845939 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2053f5ea-ae54-4b1d-951f-2355f69f1062-openstack-config-secret\") pod \"2053f5ea-ae54-4b1d-951f-2355f69f1062\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.845968 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2053f5ea-ae54-4b1d-951f-2355f69f1062-config-data\") pod \"2053f5ea-ae54-4b1d-951f-2355f69f1062\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.846036 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"2053f5ea-ae54-4b1d-951f-2355f69f1062\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.846060 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2053f5ea-ae54-4b1d-951f-2355f69f1062-ca-certs\") pod \"2053f5ea-ae54-4b1d-951f-2355f69f1062\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.846122 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2053f5ea-ae54-4b1d-951f-2355f69f1062-openstack-config\") pod \"2053f5ea-ae54-4b1d-951f-2355f69f1062\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.846158 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2053f5ea-ae54-4b1d-951f-2355f69f1062-ssh-key\") pod \"2053f5ea-ae54-4b1d-951f-2355f69f1062\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.846192 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2053f5ea-ae54-4b1d-951f-2355f69f1062-test-operator-ephemeral-workdir\") pod \"2053f5ea-ae54-4b1d-951f-2355f69f1062\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.846227 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2053f5ea-ae54-4b1d-951f-2355f69f1062-test-operator-ephemeral-temporary\") pod \"2053f5ea-ae54-4b1d-951f-2355f69f1062\" (UID: \"2053f5ea-ae54-4b1d-951f-2355f69f1062\") " Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.847263 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2053f5ea-ae54-4b1d-951f-2355f69f1062-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "2053f5ea-ae54-4b1d-951f-2355f69f1062" (UID: "2053f5ea-ae54-4b1d-951f-2355f69f1062"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.852238 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2053f5ea-ae54-4b1d-951f-2355f69f1062-kube-api-access-fw9sf" (OuterVolumeSpecName: "kube-api-access-fw9sf") pod "2053f5ea-ae54-4b1d-951f-2355f69f1062" (UID: "2053f5ea-ae54-4b1d-951f-2355f69f1062"). InnerVolumeSpecName "kube-api-access-fw9sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.858615 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2053f5ea-ae54-4b1d-951f-2355f69f1062-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "2053f5ea-ae54-4b1d-951f-2355f69f1062" (UID: "2053f5ea-ae54-4b1d-951f-2355f69f1062"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.859700 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2053f5ea-ae54-4b1d-951f-2355f69f1062-config-data" (OuterVolumeSpecName: "config-data") pod "2053f5ea-ae54-4b1d-951f-2355f69f1062" (UID: "2053f5ea-ae54-4b1d-951f-2355f69f1062"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.872523 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "2053f5ea-ae54-4b1d-951f-2355f69f1062" (UID: "2053f5ea-ae54-4b1d-951f-2355f69f1062"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.875621 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2053f5ea-ae54-4b1d-951f-2355f69f1062-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2053f5ea-ae54-4b1d-951f-2355f69f1062" (UID: "2053f5ea-ae54-4b1d-951f-2355f69f1062"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.879378 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2053f5ea-ae54-4b1d-951f-2355f69f1062-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2053f5ea-ae54-4b1d-951f-2355f69f1062" (UID: "2053f5ea-ae54-4b1d-951f-2355f69f1062"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.894049 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2053f5ea-ae54-4b1d-951f-2355f69f1062-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "2053f5ea-ae54-4b1d-951f-2355f69f1062" (UID: "2053f5ea-ae54-4b1d-951f-2355f69f1062"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.915488 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2053f5ea-ae54-4b1d-951f-2355f69f1062-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2053f5ea-ae54-4b1d-951f-2355f69f1062" (UID: "2053f5ea-ae54-4b1d-951f-2355f69f1062"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.948403 4718 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2053f5ea-ae54-4b1d-951f-2355f69f1062-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.948459 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2053f5ea-ae54-4b1d-951f-2355f69f1062-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.948469 4718 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2053f5ea-ae54-4b1d-951f-2355f69f1062-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.948481 4718 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2053f5ea-ae54-4b1d-951f-2355f69f1062-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.948492 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw9sf\" (UniqueName: \"kubernetes.io/projected/2053f5ea-ae54-4b1d-951f-2355f69f1062-kube-api-access-fw9sf\") on node \"crc\" DevicePath \"\"" Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.948501 4718 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2053f5ea-ae54-4b1d-951f-2355f69f1062-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.948509 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2053f5ea-ae54-4b1d-951f-2355f69f1062-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.948541 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.948550 4718 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2053f5ea-ae54-4b1d-951f-2355f69f1062-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 23 15:41:52 crc kubenswrapper[4718]: I1123 15:41:52.968992 4718 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 23 15:41:53 crc kubenswrapper[4718]: I1123 15:41:53.050032 4718 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 23 15:41:53 crc kubenswrapper[4718]: I1123 15:41:53.053526 4718 patch_prober.go:28] interesting 
Nov 23 15:41:53 crc kubenswrapper[4718]: I1123 15:41:53.053573 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 15:41:53 crc kubenswrapper[4718]: I1123 15:41:53.420448 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2053f5ea-ae54-4b1d-951f-2355f69f1062","Type":"ContainerDied","Data":"7912770b337a0c27cf8daf277ecc665b62b6df8e9c9bccb71604a96f6fb9144e"}
Nov 23 15:41:53 crc kubenswrapper[4718]: I1123 15:41:53.420747 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7912770b337a0c27cf8daf277ecc665b62b6df8e9c9bccb71604a96f6fb9144e"
Nov 23 15:41:53 crc kubenswrapper[4718]: I1123 15:41:53.420533 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Nov 23 15:41:55 crc kubenswrapper[4718]: I1123 15:41:55.624406 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vhnlj"
Nov 23 15:41:55 crc kubenswrapper[4718]: I1123 15:41:55.624696 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vhnlj"
Nov 23 15:41:56 crc kubenswrapper[4718]: I1123 15:41:56.263195 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Nov 23 15:41:56 crc kubenswrapper[4718]: E1123 15:41:56.264023 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2053f5ea-ae54-4b1d-951f-2355f69f1062" containerName="tempest-tests-tempest-tests-runner"
Nov 23 15:41:56 crc kubenswrapper[4718]: I1123 15:41:56.264043 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2053f5ea-ae54-4b1d-951f-2355f69f1062" containerName="tempest-tests-tempest-tests-runner"
Nov 23 15:41:56 crc kubenswrapper[4718]: E1123 15:41:56.264059 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a11598f2-9b89-4792-a637-6faeb5185b50" containerName="extract-content"
Nov 23 15:41:56 crc kubenswrapper[4718]: I1123 15:41:56.264067 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a11598f2-9b89-4792-a637-6faeb5185b50" containerName="extract-content"
Nov 23 15:41:56 crc kubenswrapper[4718]: E1123 15:41:56.264089 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a11598f2-9b89-4792-a637-6faeb5185b50" containerName="registry-server"
Nov 23 15:41:56 crc kubenswrapper[4718]: I1123 15:41:56.264096 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a11598f2-9b89-4792-a637-6faeb5185b50" containerName="registry-server"
Nov 23 15:41:56 crc kubenswrapper[4718]: E1123 15:41:56.264128 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a11598f2-9b89-4792-a637-6faeb5185b50" containerName="extract-utilities"
Nov 23 15:41:56 crc kubenswrapper[4718]: I1123 15:41:56.264140 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a11598f2-9b89-4792-a637-6faeb5185b50" containerName="extract-utilities"
Nov 23 15:41:56 crc kubenswrapper[4718]: I1123 15:41:56.264360 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a11598f2-9b89-4792-a637-6faeb5185b50" containerName="registry-server"
Nov 23 15:41:56 crc kubenswrapper[4718]: I1123 15:41:56.264377 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2053f5ea-ae54-4b1d-951f-2355f69f1062" containerName="tempest-tests-tempest-tests-runner"
Nov 23 15:41:56 crc kubenswrapper[4718]: I1123 15:41:56.265076 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Nov 23 15:41:56 crc kubenswrapper[4718]: I1123 15:41:56.268588 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-8m9rh"
Nov 23 15:41:56 crc kubenswrapper[4718]: I1123 15:41:56.278022 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Nov 23 15:41:56 crc kubenswrapper[4718]: I1123 15:41:56.314783 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"eebef478-ad56-44b6-8ecf-20cc943f86b3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Nov 23 15:41:56 crc kubenswrapper[4718]: I1123 15:41:56.315049 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4zhs\" (UniqueName: \"kubernetes.io/projected/eebef478-ad56-44b6-8ecf-20cc943f86b3-kube-api-access-c4zhs\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"eebef478-ad56-44b6-8ecf-20cc943f86b3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Nov 23 15:41:56 crc kubenswrapper[4718]: I1123 15:41:56.418509 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"eebef478-ad56-44b6-8ecf-20cc943f86b3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Nov 23 15:41:56 crc kubenswrapper[4718]: I1123 15:41:56.418548 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4zhs\" (UniqueName: \"kubernetes.io/projected/eebef478-ad56-44b6-8ecf-20cc943f86b3-kube-api-access-c4zhs\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"eebef478-ad56-44b6-8ecf-20cc943f86b3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Nov 23 15:41:56 crc kubenswrapper[4718]: I1123 15:41:56.419201 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"eebef478-ad56-44b6-8ecf-20cc943f86b3\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Nov 23 15:41:56 crc kubenswrapper[4718]: I1123 15:41:56.440254 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4zhs\" (UniqueName: \"kubernetes.io/projected/eebef478-ad56-44b6-8ecf-20cc943f86b3-kube-api-access-c4zhs\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"eebef478-ad56-44b6-8ecf-20cc943f86b3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Nov 23 15:41:56 crc kubenswrapper[4718]: I1123 15:41:56.482121 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"eebef478-ad56-44b6-8ecf-20cc943f86b3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Nov 23 15:41:56 crc kubenswrapper[4718]: I1123 15:41:56.596143 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Nov 23 15:41:56 crc kubenswrapper[4718]: I1123 15:41:56.701275 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vhnlj" podUID="0f9d5a44-04b0-4faf-894d-0bb1989a7a61" containerName="registry-server" probeResult="failure" output=<
Nov 23 15:41:56 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s
Nov 23 15:41:56 crc kubenswrapper[4718]: >
Nov 23 15:41:57 crc kubenswrapper[4718]: I1123 15:41:57.091529 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Nov 23 15:41:57 crc kubenswrapper[4718]: I1123 15:41:57.463667 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"eebef478-ad56-44b6-8ecf-20cc943f86b3","Type":"ContainerStarted","Data":"13bcd0bfb0dbbb96c1d9df1a4dbecdf3d2af898b186063cf455551569008af07"}
Nov 23 15:41:59 crc kubenswrapper[4718]: I1123 15:41:59.486402 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"eebef478-ad56-44b6-8ecf-20cc943f86b3","Type":"ContainerStarted","Data":"2231b8abc337d55d5c72fa091d3259c2db5de94ec936cc704cf8c1755f7deac5"}
Nov 23 15:41:59 crc kubenswrapper[4718]: I1123 15:41:59.509733 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.916145154 podStartE2EDuration="3.509714726s" podCreationTimestamp="2025-11-23 15:41:56 +0000 UTC" firstStartedPulling="2025-11-23 15:41:57.109045629 +0000 UTC m=+3368.348665473" lastFinishedPulling="2025-11-23 15:41:58.702615161 +0000 UTC m=+3369.942235045" observedRunningTime="2025-11-23 15:41:59.50009107 +0000 UTC m=+3370.739710964" watchObservedRunningTime="2025-11-23 15:41:59.509714726 +0000 UTC m=+3370.749334580"
Nov 23 15:42:03 crc kubenswrapper[4718]: I1123 15:42:03.709673 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pbmk7"]
Nov 23 15:42:03 crc kubenswrapper[4718]: I1123 15:42:03.712066 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pbmk7"
Need to start a new one" pod="openshift-marketplace/community-operators-pbmk7" Nov 23 15:42:03 crc kubenswrapper[4718]: I1123 15:42:03.723932 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pbmk7"] Nov 23 15:42:03 crc kubenswrapper[4718]: I1123 15:42:03.779830 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79fd204f-1eb1-42fe-a6e9-3cbb1825792e-catalog-content\") pod \"community-operators-pbmk7\" (UID: \"79fd204f-1eb1-42fe-a6e9-3cbb1825792e\") " pod="openshift-marketplace/community-operators-pbmk7" Nov 23 15:42:03 crc kubenswrapper[4718]: I1123 15:42:03.779921 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79fd204f-1eb1-42fe-a6e9-3cbb1825792e-utilities\") pod \"community-operators-pbmk7\" (UID: \"79fd204f-1eb1-42fe-a6e9-3cbb1825792e\") " pod="openshift-marketplace/community-operators-pbmk7" Nov 23 15:42:03 crc kubenswrapper[4718]: I1123 15:42:03.780081 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgt9k\" (UniqueName: \"kubernetes.io/projected/79fd204f-1eb1-42fe-a6e9-3cbb1825792e-kube-api-access-sgt9k\") pod \"community-operators-pbmk7\" (UID: \"79fd204f-1eb1-42fe-a6e9-3cbb1825792e\") " pod="openshift-marketplace/community-operators-pbmk7" Nov 23 15:42:03 crc kubenswrapper[4718]: I1123 15:42:03.882619 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgt9k\" (UniqueName: \"kubernetes.io/projected/79fd204f-1eb1-42fe-a6e9-3cbb1825792e-kube-api-access-sgt9k\") pod \"community-operators-pbmk7\" (UID: \"79fd204f-1eb1-42fe-a6e9-3cbb1825792e\") " pod="openshift-marketplace/community-operators-pbmk7" Nov 23 15:42:03 crc kubenswrapper[4718]: I1123 15:42:03.882730 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79fd204f-1eb1-42fe-a6e9-3cbb1825792e-catalog-content\") pod \"community-operators-pbmk7\" (UID: \"79fd204f-1eb1-42fe-a6e9-3cbb1825792e\") " pod="openshift-marketplace/community-operators-pbmk7" Nov 23 15:42:03 crc kubenswrapper[4718]: I1123 15:42:03.882782 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79fd204f-1eb1-42fe-a6e9-3cbb1825792e-utilities\") pod \"community-operators-pbmk7\" (UID: \"79fd204f-1eb1-42fe-a6e9-3cbb1825792e\") " pod="openshift-marketplace/community-operators-pbmk7" Nov 23 15:42:03 crc kubenswrapper[4718]: I1123 15:42:03.883337 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79fd204f-1eb1-42fe-a6e9-3cbb1825792e-utilities\") pod \"community-operators-pbmk7\" (UID: \"79fd204f-1eb1-42fe-a6e9-3cbb1825792e\") " pod="openshift-marketplace/community-operators-pbmk7" Nov 23 15:42:03 crc kubenswrapper[4718]: I1123 15:42:03.883994 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79fd204f-1eb1-42fe-a6e9-3cbb1825792e-catalog-content\") pod \"community-operators-pbmk7\" (UID: \"79fd204f-1eb1-42fe-a6e9-3cbb1825792e\") " pod="openshift-marketplace/community-operators-pbmk7" Nov 23 15:42:03 crc kubenswrapper[4718]: I1123 15:42:03.906819 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sgt9k\" (UniqueName: \"kubernetes.io/projected/79fd204f-1eb1-42fe-a6e9-3cbb1825792e-kube-api-access-sgt9k\") pod \"community-operators-pbmk7\" (UID: \"79fd204f-1eb1-42fe-a6e9-3cbb1825792e\") " pod="openshift-marketplace/community-operators-pbmk7" Nov 23 15:42:04 crc kubenswrapper[4718]: I1123 15:42:04.032523 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pbmk7" Nov 23 15:42:04 crc kubenswrapper[4718]: I1123 15:42:04.556309 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pbmk7"] Nov 23 15:42:05 crc kubenswrapper[4718]: I1123 15:42:05.542290 4718 generic.go:334] "Generic (PLEG): container finished" podID="79fd204f-1eb1-42fe-a6e9-3cbb1825792e" containerID="2895ffdb1ee18d1aedcd83b84b4567ab00d6b4c950c18dfc6484bf4a98920eb2" exitCode=0 Nov 23 15:42:05 crc kubenswrapper[4718]: I1123 15:42:05.542413 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pbmk7" event={"ID":"79fd204f-1eb1-42fe-a6e9-3cbb1825792e","Type":"ContainerDied","Data":"2895ffdb1ee18d1aedcd83b84b4567ab00d6b4c950c18dfc6484bf4a98920eb2"} Nov 23 15:42:05 crc kubenswrapper[4718]: I1123 15:42:05.542629 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pbmk7" event={"ID":"79fd204f-1eb1-42fe-a6e9-3cbb1825792e","Type":"ContainerStarted","Data":"63bfa84e2115c9889d9dc59bbf5e566ce23c942d597b20da5fed298ad13344cc"} Nov 23 15:42:05 crc kubenswrapper[4718]: I1123 15:42:05.690747 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vhnlj" Nov 23 15:42:05 crc kubenswrapper[4718]: I1123 15:42:05.742494 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vhnlj" Nov 23 15:42:07 crc kubenswrapper[4718]: I1123 15:42:07.565362 4718 generic.go:334] "Generic (PLEG): container finished" podID="79fd204f-1eb1-42fe-a6e9-3cbb1825792e" containerID="2665415198b4d471023c7a10f83e1bb0af00e9de10a8e22d41d84bcd5eb45883" exitCode=0 Nov 23 15:42:07 crc kubenswrapper[4718]: I1123 15:42:07.565455 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pbmk7" event={"ID":"79fd204f-1eb1-42fe-a6e9-3cbb1825792e","Type":"ContainerDied","Data":"2665415198b4d471023c7a10f83e1bb0af00e9de10a8e22d41d84bcd5eb45883"} Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.094267 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vhnlj"] Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.094589 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vhnlj" podUID="0f9d5a44-04b0-4faf-894d-0bb1989a7a61" containerName="registry-server" containerID="cri-o://38957e446eb2fe3abe55c53a956f2bacb23778564a3f2bfa16d94921d4a769f1" gracePeriod=2 Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.576473 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vhnlj" Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.577154 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pbmk7" event={"ID":"79fd204f-1eb1-42fe-a6e9-3cbb1825792e","Type":"ContainerStarted","Data":"1a71275112f8dc96c04e6c1e6586227316743756efaa914fb68515d89fb4ec47"} Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.585142 4718 generic.go:334] "Generic (PLEG): container finished" podID="0f9d5a44-04b0-4faf-894d-0bb1989a7a61" containerID="38957e446eb2fe3abe55c53a956f2bacb23778564a3f2bfa16d94921d4a769f1" exitCode=0 Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.585193 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhnlj" event={"ID":"0f9d5a44-04b0-4faf-894d-0bb1989a7a61","Type":"ContainerDied","Data":"38957e446eb2fe3abe55c53a956f2bacb23778564a3f2bfa16d94921d4a769f1"} Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.585216 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vhnlj" Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.585240 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhnlj" event={"ID":"0f9d5a44-04b0-4faf-894d-0bb1989a7a61","Type":"ContainerDied","Data":"1debdd39883de547cdd1c789dfc926647ff6f775f4cbe83ff00ff03cf8412937"} Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.585259 4718 scope.go:117] "RemoveContainer" containerID="38957e446eb2fe3abe55c53a956f2bacb23778564a3f2bfa16d94921d4a769f1" Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.613560 4718 scope.go:117] "RemoveContainer" containerID="0842fe2d25252461e14d629c59b62c164c2662dfc497fdfc0a305deba3053e61" Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.637344 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pbmk7" podStartSLOduration=3.085119714 podStartE2EDuration="5.637329186s" podCreationTimestamp="2025-11-23 15:42:03 +0000 UTC" firstStartedPulling="2025-11-23 15:42:05.544960084 +0000 UTC m=+3376.784579968" lastFinishedPulling="2025-11-23 15:42:08.097169596 +0000 UTC m=+3379.336789440" observedRunningTime="2025-11-23 15:42:08.628522314 +0000 UTC m=+3379.868142168" watchObservedRunningTime="2025-11-23 15:42:08.637329186 +0000 UTC m=+3379.876949030" Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.644020 4718 scope.go:117] "RemoveContainer" containerID="26c277e5a26939d4b95f5af0b2cebdd9f36061145df113c0b1ebe59c368cea2b" Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.685348 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9d5a44-04b0-4faf-894d-0bb1989a7a61-catalog-content\") pod \"0f9d5a44-04b0-4faf-894d-0bb1989a7a61\" (UID: \"0f9d5a44-04b0-4faf-894d-0bb1989a7a61\") " Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.685592 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xrk9\" (UniqueName: \"kubernetes.io/projected/0f9d5a44-04b0-4faf-894d-0bb1989a7a61-kube-api-access-6xrk9\") pod \"0f9d5a44-04b0-4faf-894d-0bb1989a7a61\" (UID: \"0f9d5a44-04b0-4faf-894d-0bb1989a7a61\") " Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.685682 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/0f9d5a44-04b0-4faf-894d-0bb1989a7a61-utilities\") pod \"0f9d5a44-04b0-4faf-894d-0bb1989a7a61\" (UID: \"0f9d5a44-04b0-4faf-894d-0bb1989a7a61\") " Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.688382 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f9d5a44-04b0-4faf-894d-0bb1989a7a61-utilities" (OuterVolumeSpecName: "utilities") pod "0f9d5a44-04b0-4faf-894d-0bb1989a7a61" (UID: "0f9d5a44-04b0-4faf-894d-0bb1989a7a61"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.689578 4718 scope.go:117] "RemoveContainer" containerID="38957e446eb2fe3abe55c53a956f2bacb23778564a3f2bfa16d94921d4a769f1" Nov 23 15:42:08 crc kubenswrapper[4718]: E1123 15:42:08.690003 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38957e446eb2fe3abe55c53a956f2bacb23778564a3f2bfa16d94921d4a769f1\": container with ID starting with 38957e446eb2fe3abe55c53a956f2bacb23778564a3f2bfa16d94921d4a769f1 not found: ID does not exist" containerID="38957e446eb2fe3abe55c53a956f2bacb23778564a3f2bfa16d94921d4a769f1" Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.690080 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38957e446eb2fe3abe55c53a956f2bacb23778564a3f2bfa16d94921d4a769f1"} err="failed to get container status \"38957e446eb2fe3abe55c53a956f2bacb23778564a3f2bfa16d94921d4a769f1\": rpc error: code = NotFound desc = could not find container \"38957e446eb2fe3abe55c53a956f2bacb23778564a3f2bfa16d94921d4a769f1\": container with ID starting with 38957e446eb2fe3abe55c53a956f2bacb23778564a3f2bfa16d94921d4a769f1 not found: ID does not exist" Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.690104 4718 scope.go:117] "RemoveContainer" containerID="0842fe2d25252461e14d629c59b62c164c2662dfc497fdfc0a305deba3053e61" Nov 23 15:42:08 crc kubenswrapper[4718]: E1123 15:42:08.690477 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0842fe2d25252461e14d629c59b62c164c2662dfc497fdfc0a305deba3053e61\": container with ID starting with 0842fe2d25252461e14d629c59b62c164c2662dfc497fdfc0a305deba3053e61 not found: ID does not exist" containerID="0842fe2d25252461e14d629c59b62c164c2662dfc497fdfc0a305deba3053e61" Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.690505 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0842fe2d25252461e14d629c59b62c164c2662dfc497fdfc0a305deba3053e61"} err="failed to get container status \"0842fe2d25252461e14d629c59b62c164c2662dfc497fdfc0a305deba3053e61\": rpc error: code = NotFound desc = could not find container \"0842fe2d25252461e14d629c59b62c164c2662dfc497fdfc0a305deba3053e61\": container with ID starting with 0842fe2d25252461e14d629c59b62c164c2662dfc497fdfc0a305deba3053e61 not found: ID does not exist" Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.690521 4718 scope.go:117] "RemoveContainer" containerID="26c277e5a26939d4b95f5af0b2cebdd9f36061145df113c0b1ebe59c368cea2b" Nov 23 15:42:08 crc kubenswrapper[4718]: E1123 15:42:08.691483 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26c277e5a26939d4b95f5af0b2cebdd9f36061145df113c0b1ebe59c368cea2b\": container with ID starting with 
26c277e5a26939d4b95f5af0b2cebdd9f36061145df113c0b1ebe59c368cea2b not found: ID does not exist" containerID="26c277e5a26939d4b95f5af0b2cebdd9f36061145df113c0b1ebe59c368cea2b" Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.691526 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26c277e5a26939d4b95f5af0b2cebdd9f36061145df113c0b1ebe59c368cea2b"} err="failed to get container status \"26c277e5a26939d4b95f5af0b2cebdd9f36061145df113c0b1ebe59c368cea2b\": rpc error: code = NotFound desc = could not find container \"26c277e5a26939d4b95f5af0b2cebdd9f36061145df113c0b1ebe59c368cea2b\": container with ID starting with 26c277e5a26939d4b95f5af0b2cebdd9f36061145df113c0b1ebe59c368cea2b not found: ID does not exist" Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.698662 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f9d5a44-04b0-4faf-894d-0bb1989a7a61-kube-api-access-6xrk9" (OuterVolumeSpecName: "kube-api-access-6xrk9") pod "0f9d5a44-04b0-4faf-894d-0bb1989a7a61" (UID: "0f9d5a44-04b0-4faf-894d-0bb1989a7a61"). InnerVolumeSpecName "kube-api-access-6xrk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.771971 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f9d5a44-04b0-4faf-894d-0bb1989a7a61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f9d5a44-04b0-4faf-894d-0bb1989a7a61" (UID: "0f9d5a44-04b0-4faf-894d-0bb1989a7a61"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.788867 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xrk9\" (UniqueName: \"kubernetes.io/projected/0f9d5a44-04b0-4faf-894d-0bb1989a7a61-kube-api-access-6xrk9\") on node \"crc\" DevicePath \"\"" Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.788929 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9d5a44-04b0-4faf-894d-0bb1989a7a61-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.788946 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9d5a44-04b0-4faf-894d-0bb1989a7a61-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.918821 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vhnlj"] Nov 23 15:42:08 crc kubenswrapper[4718]: I1123 15:42:08.925412 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vhnlj"] Nov 23 15:42:10 crc kubenswrapper[4718]: I1123 15:42:10.451252 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f9d5a44-04b0-4faf-894d-0bb1989a7a61" path="/var/lib/kubelet/pods/0f9d5a44-04b0-4faf-894d-0bb1989a7a61/volumes" Nov 23 15:42:14 crc kubenswrapper[4718]: I1123 15:42:14.033334 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pbmk7" Nov 23 15:42:14 crc kubenswrapper[4718]: I1123 15:42:14.033377 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pbmk7" Nov 23 15:42:14 crc kubenswrapper[4718]: I1123 15:42:14.087800 4718 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pbmk7" Nov 23 15:42:14 crc kubenswrapper[4718]: I1123 15:42:14.685901 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pbmk7" Nov 23 15:42:14 crc kubenswrapper[4718]: I1123 15:42:14.731114 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pbmk7"] Nov 23 15:42:16 crc kubenswrapper[4718]: I1123 15:42:16.663046 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pbmk7" podUID="79fd204f-1eb1-42fe-a6e9-3cbb1825792e" containerName="registry-server" containerID="cri-o://1a71275112f8dc96c04e6c1e6586227316743756efaa914fb68515d89fb4ec47" gracePeriod=2 Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.127167 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pbmk7" Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.266217 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79fd204f-1eb1-42fe-a6e9-3cbb1825792e-catalog-content\") pod \"79fd204f-1eb1-42fe-a6e9-3cbb1825792e\" (UID: \"79fd204f-1eb1-42fe-a6e9-3cbb1825792e\") " Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.266312 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79fd204f-1eb1-42fe-a6e9-3cbb1825792e-utilities\") pod \"79fd204f-1eb1-42fe-a6e9-3cbb1825792e\" (UID: \"79fd204f-1eb1-42fe-a6e9-3cbb1825792e\") " Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.266389 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgt9k\" (UniqueName: \"kubernetes.io/projected/79fd204f-1eb1-42fe-a6e9-3cbb1825792e-kube-api-access-sgt9k\") pod \"79fd204f-1eb1-42fe-a6e9-3cbb1825792e\" (UID: \"79fd204f-1eb1-42fe-a6e9-3cbb1825792e\") " Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.267722 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79fd204f-1eb1-42fe-a6e9-3cbb1825792e-utilities" (OuterVolumeSpecName: "utilities") pod "79fd204f-1eb1-42fe-a6e9-3cbb1825792e" (UID: "79fd204f-1eb1-42fe-a6e9-3cbb1825792e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.271280 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79fd204f-1eb1-42fe-a6e9-3cbb1825792e-kube-api-access-sgt9k" (OuterVolumeSpecName: "kube-api-access-sgt9k") pod "79fd204f-1eb1-42fe-a6e9-3cbb1825792e" (UID: "79fd204f-1eb1-42fe-a6e9-3cbb1825792e"). InnerVolumeSpecName "kube-api-access-sgt9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.326613 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79fd204f-1eb1-42fe-a6e9-3cbb1825792e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79fd204f-1eb1-42fe-a6e9-3cbb1825792e" (UID: "79fd204f-1eb1-42fe-a6e9-3cbb1825792e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.368976 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79fd204f-1eb1-42fe-a6e9-3cbb1825792e-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.369016 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgt9k\" (UniqueName: \"kubernetes.io/projected/79fd204f-1eb1-42fe-a6e9-3cbb1825792e-kube-api-access-sgt9k\") on node \"crc\" DevicePath \"\"" Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.369027 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79fd204f-1eb1-42fe-a6e9-3cbb1825792e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.676889 4718 generic.go:334] "Generic (PLEG): container finished" podID="79fd204f-1eb1-42fe-a6e9-3cbb1825792e" containerID="1a71275112f8dc96c04e6c1e6586227316743756efaa914fb68515d89fb4ec47" exitCode=0 Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.676937 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pbmk7" event={"ID":"79fd204f-1eb1-42fe-a6e9-3cbb1825792e","Type":"ContainerDied","Data":"1a71275112f8dc96c04e6c1e6586227316743756efaa914fb68515d89fb4ec47"} Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.676966 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pbmk7" event={"ID":"79fd204f-1eb1-42fe-a6e9-3cbb1825792e","Type":"ContainerDied","Data":"63bfa84e2115c9889d9dc59bbf5e566ce23c942d597b20da5fed298ad13344cc"} Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.676985 4718 scope.go:117] "RemoveContainer" containerID="1a71275112f8dc96c04e6c1e6586227316743756efaa914fb68515d89fb4ec47" Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.677018 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pbmk7" Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.709246 4718 scope.go:117] "RemoveContainer" containerID="2665415198b4d471023c7a10f83e1bb0af00e9de10a8e22d41d84bcd5eb45883" Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.721174 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pbmk7"] Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.729951 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pbmk7"] Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.736697 4718 scope.go:117] "RemoveContainer" containerID="2895ffdb1ee18d1aedcd83b84b4567ab00d6b4c950c18dfc6484bf4a98920eb2" Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.791644 4718 scope.go:117] "RemoveContainer" containerID="1a71275112f8dc96c04e6c1e6586227316743756efaa914fb68515d89fb4ec47" Nov 23 15:42:17 crc kubenswrapper[4718]: E1123 15:42:17.792294 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a71275112f8dc96c04e6c1e6586227316743756efaa914fb68515d89fb4ec47\": container with ID starting with 1a71275112f8dc96c04e6c1e6586227316743756efaa914fb68515d89fb4ec47 not found: ID does not exist" containerID="1a71275112f8dc96c04e6c1e6586227316743756efaa914fb68515d89fb4ec47" Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.792334 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a71275112f8dc96c04e6c1e6586227316743756efaa914fb68515d89fb4ec47"} err="failed to get container status \"1a71275112f8dc96c04e6c1e6586227316743756efaa914fb68515d89fb4ec47\": rpc error: code = NotFound desc = could not find container \"1a71275112f8dc96c04e6c1e6586227316743756efaa914fb68515d89fb4ec47\": container with ID starting with 1a71275112f8dc96c04e6c1e6586227316743756efaa914fb68515d89fb4ec47 not found: ID does not exist" Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.792359 4718 scope.go:117] "RemoveContainer" containerID="2665415198b4d471023c7a10f83e1bb0af00e9de10a8e22d41d84bcd5eb45883" Nov 23 15:42:17 crc kubenswrapper[4718]: E1123 15:42:17.792744 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2665415198b4d471023c7a10f83e1bb0af00e9de10a8e22d41d84bcd5eb45883\": container with ID starting with 2665415198b4d471023c7a10f83e1bb0af00e9de10a8e22d41d84bcd5eb45883 not found: ID does not exist" containerID="2665415198b4d471023c7a10f83e1bb0af00e9de10a8e22d41d84bcd5eb45883" Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.792782 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2665415198b4d471023c7a10f83e1bb0af00e9de10a8e22d41d84bcd5eb45883"} err="failed to get container status \"2665415198b4d471023c7a10f83e1bb0af00e9de10a8e22d41d84bcd5eb45883\": rpc error: code = NotFound desc = could not find container \"2665415198b4d471023c7a10f83e1bb0af00e9de10a8e22d41d84bcd5eb45883\": container with ID starting with 2665415198b4d471023c7a10f83e1bb0af00e9de10a8e22d41d84bcd5eb45883 not found: ID does not exist" Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.792804 4718 scope.go:117] "RemoveContainer" containerID="2895ffdb1ee18d1aedcd83b84b4567ab00d6b4c950c18dfc6484bf4a98920eb2" Nov 23 15:42:17 crc kubenswrapper[4718]: E1123 15:42:17.793070 4718 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2895ffdb1ee18d1aedcd83b84b4567ab00d6b4c950c18dfc6484bf4a98920eb2\": container with ID starting with 2895ffdb1ee18d1aedcd83b84b4567ab00d6b4c950c18dfc6484bf4a98920eb2 not found: ID does not exist" containerID="2895ffdb1ee18d1aedcd83b84b4567ab00d6b4c950c18dfc6484bf4a98920eb2" Nov 23 15:42:17 crc kubenswrapper[4718]: I1123 15:42:17.793094 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2895ffdb1ee18d1aedcd83b84b4567ab00d6b4c950c18dfc6484bf4a98920eb2"} err="failed to get container status \"2895ffdb1ee18d1aedcd83b84b4567ab00d6b4c950c18dfc6484bf4a98920eb2\": rpc error: code = NotFound desc = could not find container \"2895ffdb1ee18d1aedcd83b84b4567ab00d6b4c950c18dfc6484bf4a98920eb2\": container with ID starting with 2895ffdb1ee18d1aedcd83b84b4567ab00d6b4c950c18dfc6484bf4a98920eb2 not found: ID does not exist" Nov 23 15:42:18 crc kubenswrapper[4718]: I1123 15:42:18.452514 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79fd204f-1eb1-42fe-a6e9-3cbb1825792e" path="/var/lib/kubelet/pods/79fd204f-1eb1-42fe-a6e9-3cbb1825792e/volumes" Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.054025 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.054657 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.054707 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.055991 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"74327bd72e539c965b0611b0786967503764bdb42e874d7079114964edd8c8fa"} pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.056141 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" containerID="cri-o://74327bd72e539c965b0611b0786967503764bdb42e874d7079114964edd8c8fa" gracePeriod=600 Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.349668 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xm2vt/must-gather-mdfh5"] Nov 23 15:42:23 crc kubenswrapper[4718]: E1123 15:42:23.350428 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f9d5a44-04b0-4faf-894d-0bb1989a7a61" containerName="extract-content" Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.350466 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f9d5a44-04b0-4faf-894d-0bb1989a7a61" containerName="extract-content" Nov 23 15:42:23 
crc kubenswrapper[4718]: E1123 15:42:23.350481 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f9d5a44-04b0-4faf-894d-0bb1989a7a61" containerName="extract-utilities" Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.350491 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f9d5a44-04b0-4faf-894d-0bb1989a7a61" containerName="extract-utilities" Nov 23 15:42:23 crc kubenswrapper[4718]: E1123 15:42:23.350508 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79fd204f-1eb1-42fe-a6e9-3cbb1825792e" containerName="extract-utilities" Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.350516 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="79fd204f-1eb1-42fe-a6e9-3cbb1825792e" containerName="extract-utilities" Nov 23 15:42:23 crc kubenswrapper[4718]: E1123 15:42:23.350543 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79fd204f-1eb1-42fe-a6e9-3cbb1825792e" containerName="registry-server" Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.350552 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="79fd204f-1eb1-42fe-a6e9-3cbb1825792e" containerName="registry-server" Nov 23 15:42:23 crc kubenswrapper[4718]: E1123 15:42:23.350586 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79fd204f-1eb1-42fe-a6e9-3cbb1825792e" containerName="extract-content" Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.350593 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="79fd204f-1eb1-42fe-a6e9-3cbb1825792e" containerName="extract-content" Nov 23 15:42:23 crc kubenswrapper[4718]: E1123 15:42:23.350611 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f9d5a44-04b0-4faf-894d-0bb1989a7a61" containerName="registry-server" Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.350619 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f9d5a44-04b0-4faf-894d-0bb1989a7a61" containerName="registry-server" Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.350842 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="79fd204f-1eb1-42fe-a6e9-3cbb1825792e" containerName="registry-server" Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.350881 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f9d5a44-04b0-4faf-894d-0bb1989a7a61" containerName="registry-server" Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.352070 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xm2vt/must-gather-mdfh5" Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.354751 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xm2vt"/"openshift-service-ca.crt" Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.354983 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-xm2vt"/"default-dockercfg-986nv" Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.356637 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xm2vt"/"kube-root-ca.crt" Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.374916 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xm2vt/must-gather-mdfh5"] Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.487044 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/473edc6e-9b5b-4f07-848d-67153e247ccf-must-gather-output\") pod \"must-gather-mdfh5\" (UID: \"473edc6e-9b5b-4f07-848d-67153e247ccf\") " pod="openshift-must-gather-xm2vt/must-gather-mdfh5" Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.487137 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hssm\" (UniqueName: \"kubernetes.io/projected/473edc6e-9b5b-4f07-848d-67153e247ccf-kube-api-access-2hssm\") pod \"must-gather-mdfh5\" (UID: \"473edc6e-9b5b-4f07-848d-67153e247ccf\") " pod="openshift-must-gather-xm2vt/must-gather-mdfh5" Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.589299 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/473edc6e-9b5b-4f07-848d-67153e247ccf-must-gather-output\") pod \"must-gather-mdfh5\" (UID: \"473edc6e-9b5b-4f07-848d-67153e247ccf\") " pod="openshift-must-gather-xm2vt/must-gather-mdfh5" Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.589350 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hssm\" (UniqueName: \"kubernetes.io/projected/473edc6e-9b5b-4f07-848d-67153e247ccf-kube-api-access-2hssm\") pod \"must-gather-mdfh5\" (UID: \"473edc6e-9b5b-4f07-848d-67153e247ccf\") " pod="openshift-must-gather-xm2vt/must-gather-mdfh5" Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.589901 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/473edc6e-9b5b-4f07-848d-67153e247ccf-must-gather-output\") pod \"must-gather-mdfh5\" (UID: \"473edc6e-9b5b-4f07-848d-67153e247ccf\") " pod="openshift-must-gather-xm2vt/must-gather-mdfh5" Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.620240 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hssm\" (UniqueName: \"kubernetes.io/projected/473edc6e-9b5b-4f07-848d-67153e247ccf-kube-api-access-2hssm\") pod \"must-gather-mdfh5\" (UID: \"473edc6e-9b5b-4f07-848d-67153e247ccf\") " pod="openshift-must-gather-xm2vt/must-gather-mdfh5" Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.671000 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xm2vt/must-gather-mdfh5" Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.753599 4718 generic.go:334] "Generic (PLEG): container finished" podID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerID="74327bd72e539c965b0611b0786967503764bdb42e874d7079114964edd8c8fa" exitCode=0 Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.753651 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerDied","Data":"74327bd72e539c965b0611b0786967503764bdb42e874d7079114964edd8c8fa"} Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.753686 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerStarted","Data":"a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757"} Nov 23 15:42:23 crc kubenswrapper[4718]: I1123 15:42:23.753712 4718 scope.go:117] "RemoveContainer" containerID="b8e095ab1a4cac739a05773185a55193b2b536c32096f0a2d1540f29d1a204b8" Nov 23 15:42:24 crc kubenswrapper[4718]: I1123 15:42:24.228067 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xm2vt/must-gather-mdfh5"] Nov 23 15:42:24 crc kubenswrapper[4718]: I1123 15:42:24.764815 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xm2vt/must-gather-mdfh5" event={"ID":"473edc6e-9b5b-4f07-848d-67153e247ccf","Type":"ContainerStarted","Data":"e846f07d3011d0e60d0c0cd43c287e1937bcaf85759d009a324dd81cd39c08fd"} Nov 23 15:42:30 crc kubenswrapper[4718]: I1123 15:42:30.823891 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xm2vt/must-gather-mdfh5" event={"ID":"473edc6e-9b5b-4f07-848d-67153e247ccf","Type":"ContainerStarted","Data":"9608cd475478ea4581336cb0ec096da2ed490f4e7f0510ada1a7e952fd4b4834"} Nov 23 15:42:30 crc kubenswrapper[4718]: I1123 15:42:30.841702 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xm2vt/must-gather-mdfh5" podStartSLOduration=1.8007352 podStartE2EDuration="7.841683465s" podCreationTimestamp="2025-11-23 15:42:23 +0000 UTC" firstStartedPulling="2025-11-23 15:42:24.234639734 +0000 UTC m=+3395.474259578" lastFinishedPulling="2025-11-23 15:42:30.275587989 +0000 UTC m=+3401.515207843" observedRunningTime="2025-11-23 15:42:30.837930911 +0000 UTC m=+3402.077550755" watchObservedRunningTime="2025-11-23 15:42:30.841683465 +0000 UTC m=+3402.081303309" Nov 23 15:42:31 crc kubenswrapper[4718]: I1123 15:42:31.833721 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xm2vt/must-gather-mdfh5" event={"ID":"473edc6e-9b5b-4f07-848d-67153e247ccf","Type":"ContainerStarted","Data":"865a0b5bce6c2c6f8e85e9ecd41e344156b3dc52a8f35d6a00908d0b0f6e389d"} Nov 23 15:42:34 crc kubenswrapper[4718]: I1123 15:42:34.087152 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xm2vt/crc-debug-5ggk9"] Nov 23 15:42:34 crc kubenswrapper[4718]: I1123 15:42:34.088955 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xm2vt/crc-debug-5ggk9" Nov 23 15:42:34 crc kubenswrapper[4718]: I1123 15:42:34.226671 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh8pc\" (UniqueName: \"kubernetes.io/projected/bf81f567-fe56-47b6-a12f-c823a1a856d6-kube-api-access-sh8pc\") pod \"crc-debug-5ggk9\" (UID: \"bf81f567-fe56-47b6-a12f-c823a1a856d6\") " pod="openshift-must-gather-xm2vt/crc-debug-5ggk9" Nov 23 15:42:34 crc kubenswrapper[4718]: I1123 15:42:34.226804 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf81f567-fe56-47b6-a12f-c823a1a856d6-host\") pod \"crc-debug-5ggk9\" (UID: \"bf81f567-fe56-47b6-a12f-c823a1a856d6\") " pod="openshift-must-gather-xm2vt/crc-debug-5ggk9" Nov 23 15:42:34 crc kubenswrapper[4718]: I1123 15:42:34.328894 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh8pc\" (UniqueName: \"kubernetes.io/projected/bf81f567-fe56-47b6-a12f-c823a1a856d6-kube-api-access-sh8pc\") pod \"crc-debug-5ggk9\" (UID: \"bf81f567-fe56-47b6-a12f-c823a1a856d6\") " pod="openshift-must-gather-xm2vt/crc-debug-5ggk9" Nov 23 15:42:34 crc kubenswrapper[4718]: I1123 15:42:34.328967 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf81f567-fe56-47b6-a12f-c823a1a856d6-host\") pod \"crc-debug-5ggk9\" (UID: \"bf81f567-fe56-47b6-a12f-c823a1a856d6\") " pod="openshift-must-gather-xm2vt/crc-debug-5ggk9" Nov 23 15:42:34 crc kubenswrapper[4718]: I1123 15:42:34.329126 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf81f567-fe56-47b6-a12f-c823a1a856d6-host\") pod \"crc-debug-5ggk9\" (UID: \"bf81f567-fe56-47b6-a12f-c823a1a856d6\") " pod="openshift-must-gather-xm2vt/crc-debug-5ggk9" Nov 23 15:42:34 crc kubenswrapper[4718]: I1123 15:42:34.353137 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh8pc\" (UniqueName: \"kubernetes.io/projected/bf81f567-fe56-47b6-a12f-c823a1a856d6-kube-api-access-sh8pc\") pod \"crc-debug-5ggk9\" (UID: \"bf81f567-fe56-47b6-a12f-c823a1a856d6\") " pod="openshift-must-gather-xm2vt/crc-debug-5ggk9" Nov 23 15:42:34 crc kubenswrapper[4718]: I1123 15:42:34.407806 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xm2vt/crc-debug-5ggk9" Nov 23 15:42:34 crc kubenswrapper[4718]: I1123 15:42:34.875831 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xm2vt/crc-debug-5ggk9" event={"ID":"bf81f567-fe56-47b6-a12f-c823a1a856d6","Type":"ContainerStarted","Data":"5dc72ad0b7dcefbcce80a80f20822ce80aa7f40f5357ac7d007188ce75347428"} Nov 23 15:43:05 crc kubenswrapper[4718]: E1123 15:43:05.216175 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Nov 23 15:43:05 crc kubenswrapper[4718]: E1123 15:43:05.216995 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sh8pc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-5ggk9_openshift-must-gather-xm2vt(bf81f567-fe56-47b6-a12f-c823a1a856d6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 15:43:05 crc kubenswrapper[4718]: E1123 15:43:05.218196 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-xm2vt/crc-debug-5ggk9" podUID="bf81f567-fe56-47b6-a12f-c823a1a856d6" Nov 23 15:43:06 crc 
kubenswrapper[4718]: E1123 15:43:06.152484 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-xm2vt/crc-debug-5ggk9" podUID="bf81f567-fe56-47b6-a12f-c823a1a856d6" Nov 23 15:43:23 crc kubenswrapper[4718]: I1123 15:43:23.296295 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xm2vt/crc-debug-5ggk9" event={"ID":"bf81f567-fe56-47b6-a12f-c823a1a856d6","Type":"ContainerStarted","Data":"0b59a02e4ab689f4f2dc2bcc519dbf5d477e77f556946d0d705fba2e4ced2eeb"} Nov 23 15:43:23 crc kubenswrapper[4718]: I1123 15:43:23.315139 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xm2vt/crc-debug-5ggk9" podStartSLOduration=1.649088496 podStartE2EDuration="49.315119889s" podCreationTimestamp="2025-11-23 15:42:34 +0000 UTC" firstStartedPulling="2025-11-23 15:42:34.481135349 +0000 UTC m=+3405.720755193" lastFinishedPulling="2025-11-23 15:43:22.147166742 +0000 UTC m=+3453.386786586" observedRunningTime="2025-11-23 15:43:23.307952741 +0000 UTC m=+3454.547572585" watchObservedRunningTime="2025-11-23 15:43:23.315119889 +0000 UTC m=+3454.554739733" Nov 23 15:44:09 crc kubenswrapper[4718]: I1123 15:44:09.783316 4718 generic.go:334] "Generic (PLEG): container finished" podID="bf81f567-fe56-47b6-a12f-c823a1a856d6" containerID="0b59a02e4ab689f4f2dc2bcc519dbf5d477e77f556946d0d705fba2e4ced2eeb" exitCode=0 Nov 23 15:44:09 crc kubenswrapper[4718]: I1123 15:44:09.783420 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xm2vt/crc-debug-5ggk9" event={"ID":"bf81f567-fe56-47b6-a12f-c823a1a856d6","Type":"ContainerDied","Data":"0b59a02e4ab689f4f2dc2bcc519dbf5d477e77f556946d0d705fba2e4ced2eeb"} Nov 23 15:44:10 crc kubenswrapper[4718]: I1123 15:44:10.907993 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xm2vt/crc-debug-5ggk9" Nov 23 15:44:10 crc kubenswrapper[4718]: I1123 15:44:10.944788 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xm2vt/crc-debug-5ggk9"] Nov 23 15:44:10 crc kubenswrapper[4718]: I1123 15:44:10.952450 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xm2vt/crc-debug-5ggk9"] Nov 23 15:44:10 crc kubenswrapper[4718]: I1123 15:44:10.972655 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh8pc\" (UniqueName: \"kubernetes.io/projected/bf81f567-fe56-47b6-a12f-c823a1a856d6-kube-api-access-sh8pc\") pod \"bf81f567-fe56-47b6-a12f-c823a1a856d6\" (UID: \"bf81f567-fe56-47b6-a12f-c823a1a856d6\") " Nov 23 15:44:10 crc kubenswrapper[4718]: I1123 15:44:10.972837 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf81f567-fe56-47b6-a12f-c823a1a856d6-host\") pod \"bf81f567-fe56-47b6-a12f-c823a1a856d6\" (UID: \"bf81f567-fe56-47b6-a12f-c823a1a856d6\") " Nov 23 15:44:10 crc kubenswrapper[4718]: I1123 15:44:10.972975 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf81f567-fe56-47b6-a12f-c823a1a856d6-host" (OuterVolumeSpecName: "host") pod "bf81f567-fe56-47b6-a12f-c823a1a856d6" (UID: "bf81f567-fe56-47b6-a12f-c823a1a856d6"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 15:44:10 crc kubenswrapper[4718]: I1123 15:44:10.973536 4718 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf81f567-fe56-47b6-a12f-c823a1a856d6-host\") on node \"crc\" DevicePath \"\"" Nov 23 15:44:10 crc kubenswrapper[4718]: I1123 15:44:10.980515 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf81f567-fe56-47b6-a12f-c823a1a856d6-kube-api-access-sh8pc" (OuterVolumeSpecName: "kube-api-access-sh8pc") pod "bf81f567-fe56-47b6-a12f-c823a1a856d6" (UID: "bf81f567-fe56-47b6-a12f-c823a1a856d6"). InnerVolumeSpecName "kube-api-access-sh8pc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:44:11 crc kubenswrapper[4718]: I1123 15:44:11.075453 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh8pc\" (UniqueName: \"kubernetes.io/projected/bf81f567-fe56-47b6-a12f-c823a1a856d6-kube-api-access-sh8pc\") on node \"crc\" DevicePath \"\"" Nov 23 15:44:11 crc kubenswrapper[4718]: I1123 15:44:11.807695 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dc72ad0b7dcefbcce80a80f20822ce80aa7f40f5357ac7d007188ce75347428" Nov 23 15:44:11 crc kubenswrapper[4718]: I1123 15:44:11.807738 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xm2vt/crc-debug-5ggk9" Nov 23 15:44:12 crc kubenswrapper[4718]: I1123 15:44:12.148711 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xm2vt/crc-debug-rqnfm"] Nov 23 15:44:12 crc kubenswrapper[4718]: E1123 15:44:12.149079 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf81f567-fe56-47b6-a12f-c823a1a856d6" containerName="container-00" Nov 23 15:44:12 crc kubenswrapper[4718]: I1123 15:44:12.149091 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf81f567-fe56-47b6-a12f-c823a1a856d6" containerName="container-00" Nov 23 15:44:12 crc kubenswrapper[4718]: I1123 15:44:12.149298 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf81f567-fe56-47b6-a12f-c823a1a856d6" containerName="container-00" Nov 23 15:44:12 crc kubenswrapper[4718]: I1123 15:44:12.149898 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xm2vt/crc-debug-rqnfm" Nov 23 15:44:12 crc kubenswrapper[4718]: I1123 15:44:12.194524 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8b14b6c0-6ad2-494a-a828-6a3d9b685a4a-host\") pod \"crc-debug-rqnfm\" (UID: \"8b14b6c0-6ad2-494a-a828-6a3d9b685a4a\") " pod="openshift-must-gather-xm2vt/crc-debug-rqnfm" Nov 23 15:44:12 crc kubenswrapper[4718]: I1123 15:44:12.194877 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx7fw\" (UniqueName: \"kubernetes.io/projected/8b14b6c0-6ad2-494a-a828-6a3d9b685a4a-kube-api-access-vx7fw\") pod \"crc-debug-rqnfm\" (UID: \"8b14b6c0-6ad2-494a-a828-6a3d9b685a4a\") " pod="openshift-must-gather-xm2vt/crc-debug-rqnfm" Nov 23 15:44:12 crc kubenswrapper[4718]: I1123 15:44:12.297628 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx7fw\" (UniqueName: \"kubernetes.io/projected/8b14b6c0-6ad2-494a-a828-6a3d9b685a4a-kube-api-access-vx7fw\") pod \"crc-debug-rqnfm\" (UID: \"8b14b6c0-6ad2-494a-a828-6a3d9b685a4a\") " pod="openshift-must-gather-xm2vt/crc-debug-rqnfm" Nov 23 15:44:12 crc kubenswrapper[4718]: I1123 15:44:12.297832 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8b14b6c0-6ad2-494a-a828-6a3d9b685a4a-host\") pod \"crc-debug-rqnfm\" (UID: \"8b14b6c0-6ad2-494a-a828-6a3d9b685a4a\") " pod="openshift-must-gather-xm2vt/crc-debug-rqnfm" Nov 23 15:44:12 crc kubenswrapper[4718]: I1123 15:44:12.298072 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8b14b6c0-6ad2-494a-a828-6a3d9b685a4a-host\") pod \"crc-debug-rqnfm\" (UID: \"8b14b6c0-6ad2-494a-a828-6a3d9b685a4a\") " pod="openshift-must-gather-xm2vt/crc-debug-rqnfm" Nov 23 15:44:12 crc kubenswrapper[4718]: I1123 15:44:12.328478 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx7fw\" (UniqueName: \"kubernetes.io/projected/8b14b6c0-6ad2-494a-a828-6a3d9b685a4a-kube-api-access-vx7fw\") pod \"crc-debug-rqnfm\" (UID: \"8b14b6c0-6ad2-494a-a828-6a3d9b685a4a\") " pod="openshift-must-gather-xm2vt/crc-debug-rqnfm" Nov 23 15:44:12 crc kubenswrapper[4718]: I1123 15:44:12.453856 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf81f567-fe56-47b6-a12f-c823a1a856d6" path="/var/lib/kubelet/pods/bf81f567-fe56-47b6-a12f-c823a1a856d6/volumes" Nov 23 15:44:12 crc kubenswrapper[4718]: I1123 15:44:12.472405 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xm2vt/crc-debug-rqnfm" Nov 23 15:44:12 crc kubenswrapper[4718]: I1123 15:44:12.821373 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xm2vt/crc-debug-rqnfm" event={"ID":"8b14b6c0-6ad2-494a-a828-6a3d9b685a4a","Type":"ContainerStarted","Data":"0de661cd4f498f3915e130bfec79ccfe52956e6a0249d54b83e5a6c684ab3525"} Nov 23 15:44:12 crc kubenswrapper[4718]: I1123 15:44:12.821430 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xm2vt/crc-debug-rqnfm" event={"ID":"8b14b6c0-6ad2-494a-a828-6a3d9b685a4a","Type":"ContainerStarted","Data":"82cb78acb0dc8172da14ee9ccdf379d2c9f3996360dcd1f17e28310e09022cbd"} Nov 23 15:44:13 crc kubenswrapper[4718]: I1123 15:44:13.341279 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xm2vt/crc-debug-rqnfm"] Nov 23 15:44:13 crc kubenswrapper[4718]: I1123 15:44:13.350839 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xm2vt/crc-debug-rqnfm"] Nov 23 15:44:13 crc kubenswrapper[4718]: I1123 15:44:13.833531 4718 generic.go:334] "Generic (PLEG): container finished" podID="8b14b6c0-6ad2-494a-a828-6a3d9b685a4a" containerID="0de661cd4f498f3915e130bfec79ccfe52956e6a0249d54b83e5a6c684ab3525" exitCode=0 Nov 23 15:44:13 crc kubenswrapper[4718]: I1123 15:44:13.927183 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xm2vt/crc-debug-rqnfm" Nov 23 15:44:14 crc kubenswrapper[4718]: I1123 15:44:14.029000 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx7fw\" (UniqueName: \"kubernetes.io/projected/8b14b6c0-6ad2-494a-a828-6a3d9b685a4a-kube-api-access-vx7fw\") pod \"8b14b6c0-6ad2-494a-a828-6a3d9b685a4a\" (UID: \"8b14b6c0-6ad2-494a-a828-6a3d9b685a4a\") " Nov 23 15:44:14 crc kubenswrapper[4718]: I1123 15:44:14.029153 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8b14b6c0-6ad2-494a-a828-6a3d9b685a4a-host\") pod \"8b14b6c0-6ad2-494a-a828-6a3d9b685a4a\" (UID: \"8b14b6c0-6ad2-494a-a828-6a3d9b685a4a\") " Nov 23 15:44:14 crc kubenswrapper[4718]: I1123 15:44:14.029412 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b14b6c0-6ad2-494a-a828-6a3d9b685a4a-host" (OuterVolumeSpecName: "host") pod "8b14b6c0-6ad2-494a-a828-6a3d9b685a4a" (UID: "8b14b6c0-6ad2-494a-a828-6a3d9b685a4a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 15:44:14 crc kubenswrapper[4718]: I1123 15:44:14.029969 4718 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8b14b6c0-6ad2-494a-a828-6a3d9b685a4a-host\") on node \"crc\" DevicePath \"\"" Nov 23 15:44:14 crc kubenswrapper[4718]: I1123 15:44:14.038160 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b14b6c0-6ad2-494a-a828-6a3d9b685a4a-kube-api-access-vx7fw" (OuterVolumeSpecName: "kube-api-access-vx7fw") pod "8b14b6c0-6ad2-494a-a828-6a3d9b685a4a" (UID: "8b14b6c0-6ad2-494a-a828-6a3d9b685a4a"). InnerVolumeSpecName "kube-api-access-vx7fw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:44:14 crc kubenswrapper[4718]: I1123 15:44:14.131855 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx7fw\" (UniqueName: \"kubernetes.io/projected/8b14b6c0-6ad2-494a-a828-6a3d9b685a4a-kube-api-access-vx7fw\") on node \"crc\" DevicePath \"\"" Nov 23 15:44:14 crc kubenswrapper[4718]: I1123 15:44:14.459669 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b14b6c0-6ad2-494a-a828-6a3d9b685a4a" path="/var/lib/kubelet/pods/8b14b6c0-6ad2-494a-a828-6a3d9b685a4a/volumes" Nov 23 15:44:14 crc kubenswrapper[4718]: I1123 15:44:14.524333 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xm2vt/crc-debug-q9g4s"] Nov 23 15:44:14 crc kubenswrapper[4718]: E1123 15:44:14.524805 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b14b6c0-6ad2-494a-a828-6a3d9b685a4a" containerName="container-00" Nov 23 15:44:14 crc kubenswrapper[4718]: I1123 15:44:14.524830 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b14b6c0-6ad2-494a-a828-6a3d9b685a4a" containerName="container-00" Nov 23 15:44:14 crc kubenswrapper[4718]: I1123 15:44:14.525077 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b14b6c0-6ad2-494a-a828-6a3d9b685a4a" containerName="container-00" Nov 23 15:44:14 crc kubenswrapper[4718]: I1123 15:44:14.525721 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xm2vt/crc-debug-q9g4s" Nov 23 15:44:14 crc kubenswrapper[4718]: I1123 15:44:14.543645 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crq7p\" (UniqueName: \"kubernetes.io/projected/1c89ef69-b1b6-4f15-bc91-f2b7a0641384-kube-api-access-crq7p\") pod \"crc-debug-q9g4s\" (UID: \"1c89ef69-b1b6-4f15-bc91-f2b7a0641384\") " pod="openshift-must-gather-xm2vt/crc-debug-q9g4s" Nov 23 15:44:14 crc kubenswrapper[4718]: I1123 15:44:14.544006 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c89ef69-b1b6-4f15-bc91-f2b7a0641384-host\") pod \"crc-debug-q9g4s\" (UID: \"1c89ef69-b1b6-4f15-bc91-f2b7a0641384\") " pod="openshift-must-gather-xm2vt/crc-debug-q9g4s" Nov 23 15:44:14 crc kubenswrapper[4718]: I1123 15:44:14.646221 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crq7p\" (UniqueName: \"kubernetes.io/projected/1c89ef69-b1b6-4f15-bc91-f2b7a0641384-kube-api-access-crq7p\") pod \"crc-debug-q9g4s\" (UID: \"1c89ef69-b1b6-4f15-bc91-f2b7a0641384\") " pod="openshift-must-gather-xm2vt/crc-debug-q9g4s" Nov 23 15:44:14 crc kubenswrapper[4718]: I1123 15:44:14.646313 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c89ef69-b1b6-4f15-bc91-f2b7a0641384-host\") pod \"crc-debug-q9g4s\" (UID: \"1c89ef69-b1b6-4f15-bc91-f2b7a0641384\") " pod="openshift-must-gather-xm2vt/crc-debug-q9g4s" Nov 23 15:44:14 crc kubenswrapper[4718]: I1123 15:44:14.646511 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c89ef69-b1b6-4f15-bc91-f2b7a0641384-host\") pod \"crc-debug-q9g4s\" (UID: \"1c89ef69-b1b6-4f15-bc91-f2b7a0641384\") " pod="openshift-must-gather-xm2vt/crc-debug-q9g4s" Nov 23 15:44:14 crc kubenswrapper[4718]: I1123 15:44:14.662815 4718 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-crq7p\" (UniqueName: \"kubernetes.io/projected/1c89ef69-b1b6-4f15-bc91-f2b7a0641384-kube-api-access-crq7p\") pod \"crc-debug-q9g4s\" (UID: \"1c89ef69-b1b6-4f15-bc91-f2b7a0641384\") " pod="openshift-must-gather-xm2vt/crc-debug-q9g4s" Nov 23 15:44:14 crc kubenswrapper[4718]: I1123 15:44:14.843189 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xm2vt/crc-debug-q9g4s" Nov 23 15:44:14 crc kubenswrapper[4718]: I1123 15:44:14.843555 4718 scope.go:117] "RemoveContainer" containerID="0de661cd4f498f3915e130bfec79ccfe52956e6a0249d54b83e5a6c684ab3525" Nov 23 15:44:14 crc kubenswrapper[4718]: I1123 15:44:14.843593 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xm2vt/crc-debug-rqnfm" Nov 23 15:44:14 crc kubenswrapper[4718]: W1123 15:44:14.881099 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c89ef69_b1b6_4f15_bc91_f2b7a0641384.slice/crio-90c88aabb939f5104349e6dcc3d1c48c1f3ac69eaf39e7969a81a1e4417815d6 WatchSource:0}: Error finding container 90c88aabb939f5104349e6dcc3d1c48c1f3ac69eaf39e7969a81a1e4417815d6: Status 404 returned error can't find the container with id 90c88aabb939f5104349e6dcc3d1c48c1f3ac69eaf39e7969a81a1e4417815d6 Nov 23 15:44:15 crc kubenswrapper[4718]: I1123 15:44:15.860937 4718 generic.go:334] "Generic (PLEG): container finished" podID="1c89ef69-b1b6-4f15-bc91-f2b7a0641384" containerID="437a078927d3cc5f5070d795312a772f25f4eb5a21019ca5d2a433689ee9a138" exitCode=0 Nov 23 15:44:15 crc kubenswrapper[4718]: I1123 15:44:15.860987 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xm2vt/crc-debug-q9g4s" event={"ID":"1c89ef69-b1b6-4f15-bc91-f2b7a0641384","Type":"ContainerDied","Data":"437a078927d3cc5f5070d795312a772f25f4eb5a21019ca5d2a433689ee9a138"} Nov 23 15:44:15 crc kubenswrapper[4718]: I1123 15:44:15.861548 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xm2vt/crc-debug-q9g4s" event={"ID":"1c89ef69-b1b6-4f15-bc91-f2b7a0641384","Type":"ContainerStarted","Data":"90c88aabb939f5104349e6dcc3d1c48c1f3ac69eaf39e7969a81a1e4417815d6"} Nov 23 15:44:15 crc kubenswrapper[4718]: I1123 15:44:15.911401 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xm2vt/crc-debug-q9g4s"] Nov 23 15:44:15 crc kubenswrapper[4718]: I1123 15:44:15.922579 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xm2vt/crc-debug-q9g4s"] Nov 23 15:44:16 crc kubenswrapper[4718]: I1123 15:44:16.981491 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xm2vt/crc-debug-q9g4s" Nov 23 15:44:17 crc kubenswrapper[4718]: I1123 15:44:17.091375 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c89ef69-b1b6-4f15-bc91-f2b7a0641384-host\") pod \"1c89ef69-b1b6-4f15-bc91-f2b7a0641384\" (UID: \"1c89ef69-b1b6-4f15-bc91-f2b7a0641384\") " Nov 23 15:44:17 crc kubenswrapper[4718]: I1123 15:44:17.091580 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c89ef69-b1b6-4f15-bc91-f2b7a0641384-host" (OuterVolumeSpecName: "host") pod "1c89ef69-b1b6-4f15-bc91-f2b7a0641384" (UID: "1c89ef69-b1b6-4f15-bc91-f2b7a0641384"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 15:44:17 crc kubenswrapper[4718]: I1123 15:44:17.091657 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crq7p\" (UniqueName: \"kubernetes.io/projected/1c89ef69-b1b6-4f15-bc91-f2b7a0641384-kube-api-access-crq7p\") pod \"1c89ef69-b1b6-4f15-bc91-f2b7a0641384\" (UID: \"1c89ef69-b1b6-4f15-bc91-f2b7a0641384\") " Nov 23 15:44:17 crc kubenswrapper[4718]: I1123 15:44:17.092107 4718 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c89ef69-b1b6-4f15-bc91-f2b7a0641384-host\") on node \"crc\" DevicePath \"\"" Nov 23 15:44:17 crc kubenswrapper[4718]: I1123 15:44:17.100747 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c89ef69-b1b6-4f15-bc91-f2b7a0641384-kube-api-access-crq7p" (OuterVolumeSpecName: "kube-api-access-crq7p") pod "1c89ef69-b1b6-4f15-bc91-f2b7a0641384" (UID: "1c89ef69-b1b6-4f15-bc91-f2b7a0641384"). InnerVolumeSpecName "kube-api-access-crq7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:44:17 crc kubenswrapper[4718]: I1123 15:44:17.193457 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crq7p\" (UniqueName: \"kubernetes.io/projected/1c89ef69-b1b6-4f15-bc91-f2b7a0641384-kube-api-access-crq7p\") on node \"crc\" DevicePath \"\"" Nov 23 15:44:17 crc kubenswrapper[4718]: I1123 15:44:17.884790 4718 scope.go:117] "RemoveContainer" containerID="437a078927d3cc5f5070d795312a772f25f4eb5a21019ca5d2a433689ee9a138" Nov 23 15:44:17 crc kubenswrapper[4718]: I1123 15:44:17.884803 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xm2vt/crc-debug-q9g4s" Nov 23 15:44:18 crc kubenswrapper[4718]: I1123 15:44:18.451891 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c89ef69-b1b6-4f15-bc91-f2b7a0641384" path="/var/lib/kubelet/pods/1c89ef69-b1b6-4f15-bc91-f2b7a0641384/volumes" Nov 23 15:44:23 crc kubenswrapper[4718]: I1123 15:44:23.053558 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 15:44:23 crc kubenswrapper[4718]: I1123 15:44:23.054055 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 15:44:32 crc kubenswrapper[4718]: I1123 15:44:32.334137 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5c6d99969d-lw2d7_cdc09b60-8801-4296-98e6-94a2e5ac8697/barbican-api/0.log" Nov 23 15:44:32 crc kubenswrapper[4718]: I1123 15:44:32.495404 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5c6d99969d-lw2d7_cdc09b60-8801-4296-98e6-94a2e5ac8697/barbican-api-log/0.log" Nov 23 15:44:32 crc kubenswrapper[4718]: I1123 15:44:32.577903 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-74dd8678b6-fs5mq_effa4eb1-dc32-4d96-8f19-0eaae852f9a1/barbican-keystone-listener/0.log" Nov 23 15:44:32 crc kubenswrapper[4718]: I1123 15:44:32.613971 
4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-74dd8678b6-fs5mq_effa4eb1-dc32-4d96-8f19-0eaae852f9a1/barbican-keystone-listener-log/0.log" Nov 23 15:44:32 crc kubenswrapper[4718]: I1123 15:44:32.777154 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-95d8d74d7-xhwwk_4fcdc6b9-b082-4a9d-b905-7978d941b38f/barbican-worker-log/0.log" Nov 23 15:44:32 crc kubenswrapper[4718]: I1123 15:44:32.782154 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-95d8d74d7-xhwwk_4fcdc6b9-b082-4a9d-b905-7978d941b38f/barbican-worker/0.log" Nov 23 15:44:32 crc kubenswrapper[4718]: I1123 15:44:32.993098 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_786c1d0e-1895-4b3f-a95e-537692e9685d/ceilometer-central-agent/0.log" Nov 23 15:44:32 crc kubenswrapper[4718]: I1123 15:44:32.997845 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z_ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:44:33 crc kubenswrapper[4718]: I1123 15:44:33.093754 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_786c1d0e-1895-4b3f-a95e-537692e9685d/ceilometer-notification-agent/0.log" Nov 23 15:44:33 crc kubenswrapper[4718]: I1123 15:44:33.168622 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_786c1d0e-1895-4b3f-a95e-537692e9685d/sg-core/0.log" Nov 23 15:44:33 crc kubenswrapper[4718]: I1123 15:44:33.178075 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_786c1d0e-1895-4b3f-a95e-537692e9685d/proxy-httpd/0.log" Nov 23 15:44:33 crc kubenswrapper[4718]: I1123 15:44:33.317208 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3a35d79c-43f8-4fbb-822d-d4b42a332068/cinder-api/0.log" Nov 23 15:44:33 crc kubenswrapper[4718]: I1123 15:44:33.367505 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3a35d79c-43f8-4fbb-822d-d4b42a332068/cinder-api-log/0.log" Nov 23 15:44:33 crc kubenswrapper[4718]: I1123 15:44:33.416515 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1ac027eb-4d7f-4d21-8689-9ed48cd5b35b/cinder-scheduler/0.log" Nov 23 15:44:33 crc kubenswrapper[4718]: I1123 15:44:33.515604 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1ac027eb-4d7f-4d21-8689-9ed48cd5b35b/probe/0.log" Nov 23 15:44:33 crc kubenswrapper[4718]: I1123 15:44:33.663668 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b_843c6972-d172-42d0-9c7c-bacd49de3307/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:44:33 crc kubenswrapper[4718]: I1123 15:44:33.714555 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2_4785d6c3-899e-4bfd-9333-b4493a1cae09/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:44:33 crc kubenswrapper[4718]: I1123 15:44:33.837344 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-44tvh_680f2aea-fad7-47b3-aabb-06c149297a03/init/0.log" Nov 23 15:44:33 crc kubenswrapper[4718]: I1123 15:44:33.999460 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-44tvh_680f2aea-fad7-47b3-aabb-06c149297a03/init/0.log" Nov 23 15:44:34 crc kubenswrapper[4718]: I1123 15:44:34.063782 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-44tvh_680f2aea-fad7-47b3-aabb-06c149297a03/dnsmasq-dns/0.log" Nov 23 15:44:34 crc kubenswrapper[4718]: I1123 15:44:34.105118 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-84twb_253a44f7-f768-49ca-88c1-de87b9cbcbbb/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:44:34 crc kubenswrapper[4718]: I1123 15:44:34.246225 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_34d380e0-2ed9-45ce-9c05-85b138e3a99a/glance-log/0.log" Nov 23 15:44:34 crc kubenswrapper[4718]: I1123 15:44:34.269795 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_34d380e0-2ed9-45ce-9c05-85b138e3a99a/glance-httpd/0.log" Nov 23 15:44:34 crc kubenswrapper[4718]: I1123 15:44:34.406863 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_09c60904-5047-4206-97a2-57b5c85a22d5/glance-httpd/0.log" Nov 23 15:44:34 crc kubenswrapper[4718]: I1123 15:44:34.443712 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_09c60904-5047-4206-97a2-57b5c85a22d5/glance-log/0.log" Nov 23 15:44:34 crc kubenswrapper[4718]: I1123 15:44:34.633524 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-65c9478d8d-nxsfl_ebb73efe-fe18-4507-b723-d3dbf1d8ed91/horizon/0.log" Nov 23 15:44:34 crc kubenswrapper[4718]: I1123 15:44:34.955815 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx_e04ec17a-ea46-47b8-ac60-1e2a22849b63/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:44:35 crc kubenswrapper[4718]: I1123 15:44:35.013696 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-65c9478d8d-nxsfl_ebb73efe-fe18-4507-b723-d3dbf1d8ed91/horizon-log/0.log" Nov 23 15:44:35 crc kubenswrapper[4718]: I1123 15:44:35.193467 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-z276b_6bc22e89-875e-4f3a-93b3-5e738e897c23/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:44:35 crc kubenswrapper[4718]: I1123 15:44:35.278852 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_34c11acb-21ce-4e87-baab-f6f765d508cf/kube-state-metrics/0.log" Nov 23 15:44:35 crc kubenswrapper[4718]: I1123 15:44:35.393643 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-844cdbd5f8-ptmjk_e7d1c35f-1d40-4385-9579-dc7477cc104d/keystone-api/0.log" Nov 23 15:44:35 crc kubenswrapper[4718]: I1123 15:44:35.594930 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj_e50f1c92-4d4a-4a83-bf46-c8268c34d373/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:44:35 crc kubenswrapper[4718]: I1123 15:44:35.906664 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6566df567c-72brl_14545901-b770-4d66-8692-51937e97d24a/neutron-httpd/0.log" Nov 23 15:44:36 crc kubenswrapper[4718]: I1123 15:44:36.059686 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-6566df567c-72brl_14545901-b770-4d66-8692-51937e97d24a/neutron-api/0.log" Nov 23 15:44:36 crc kubenswrapper[4718]: I1123 15:44:36.304025 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw_a7538df6-0093-40e0-b2da-59c2273f1f0f/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:44:36 crc kubenswrapper[4718]: I1123 15:44:36.743018 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b70088cf-3265-44f6-b723-5c5317dd1f54/nova-cell0-conductor-conductor/0.log" Nov 23 15:44:36 crc kubenswrapper[4718]: I1123 15:44:36.758824 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1bc63d6b-fef8-4086-bdf2-56e1ecb469bd/nova-api-log/0.log" Nov 23 15:44:36 crc kubenswrapper[4718]: I1123 15:44:36.945758 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1bc63d6b-fef8-4086-bdf2-56e1ecb469bd/nova-api-api/0.log" Nov 23 15:44:37 crc kubenswrapper[4718]: I1123 15:44:37.046759 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_427b4910-7814-4a43-8b0a-b91b05aed240/nova-cell1-conductor-conductor/0.log" Nov 23 15:44:37 crc kubenswrapper[4718]: I1123 15:44:37.086581 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f5538cf1-7653-415e-8b15-851291e281f1/nova-cell1-novncproxy-novncproxy/0.log" Nov 23 15:44:37 crc kubenswrapper[4718]: I1123 15:44:37.291261 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-hl7dp_b4aa1c8b-75a3-4a8f-98e1-25456c27560f/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:44:37 crc kubenswrapper[4718]: I1123 15:44:37.461362 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_8c3e138f-87e1-4b20-8fba-0fa931f9e09e/nova-metadata-log/0.log" Nov 23 15:44:37 crc kubenswrapper[4718]: I1123 15:44:37.732877 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_da1e3b17-14ea-456e-a694-073e8fd4edaf/mysql-bootstrap/0.log" Nov 23 15:44:37 crc kubenswrapper[4718]: I1123 15:44:37.733556 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_8446752b-4a28-452c-8df8-6ac8558b7754/nova-scheduler-scheduler/0.log" Nov 23 15:44:37 crc kubenswrapper[4718]: I1123 15:44:37.927036 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_da1e3b17-14ea-456e-a694-073e8fd4edaf/mysql-bootstrap/0.log" Nov 23 15:44:37 crc kubenswrapper[4718]: I1123 15:44:37.968535 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_da1e3b17-14ea-456e-a694-073e8fd4edaf/galera/0.log" Nov 23 15:44:38 crc kubenswrapper[4718]: I1123 15:44:38.146989 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_adbd2274-f81b-4930-85c1-eec8a7a3790d/mysql-bootstrap/0.log" Nov 23 15:44:38 crc kubenswrapper[4718]: I1123 15:44:38.296863 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_adbd2274-f81b-4930-85c1-eec8a7a3790d/mysql-bootstrap/0.log" Nov 23 15:44:38 crc kubenswrapper[4718]: I1123 15:44:38.328328 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_adbd2274-f81b-4930-85c1-eec8a7a3790d/galera/0.log" Nov 23 15:44:38 crc 
kubenswrapper[4718]: I1123 15:44:38.540311 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a2720257-17da-4635-bd8c-2d65b9e8b9f0/openstackclient/0.log" Nov 23 15:44:38 crc kubenswrapper[4718]: I1123 15:44:38.623961 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_8c3e138f-87e1-4b20-8fba-0fa931f9e09e/nova-metadata-metadata/0.log" Nov 23 15:44:38 crc kubenswrapper[4718]: I1123 15:44:38.807309 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-dc8hm_65b3425b-bedb-4274-a600-091b1910a2d7/ovn-controller/0.log" Nov 23 15:44:38 crc kubenswrapper[4718]: I1123 15:44:38.996537 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jfjhx_d8ae0875-d71a-40f8-8db0-6af6b7acd60f/openstack-network-exporter/0.log" Nov 23 15:44:39 crc kubenswrapper[4718]: I1123 15:44:39.059923 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-86mpq_e6e54f9e-4a86-41d3-9723-9455c682fddc/ovsdb-server-init/0.log" Nov 23 15:44:39 crc kubenswrapper[4718]: I1123 15:44:39.270055 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-86mpq_e6e54f9e-4a86-41d3-9723-9455c682fddc/ovsdb-server/0.log" Nov 23 15:44:39 crc kubenswrapper[4718]: I1123 15:44:39.272732 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-86mpq_e6e54f9e-4a86-41d3-9723-9455c682fddc/ovsdb-server-init/0.log" Nov 23 15:44:39 crc kubenswrapper[4718]: I1123 15:44:39.330985 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-86mpq_e6e54f9e-4a86-41d3-9723-9455c682fddc/ovs-vswitchd/0.log" Nov 23 15:44:39 crc kubenswrapper[4718]: I1123 15:44:39.513493 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-b7ghx_cea34a47-7094-4217-8086-0c4d9ec9d23f/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:44:39 crc kubenswrapper[4718]: I1123 15:44:39.626906 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f1696a6b-d5a7-403f-b9d0-168c0e42a937/ovn-northd/0.log" Nov 23 15:44:39 crc kubenswrapper[4718]: I1123 15:44:39.649162 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f1696a6b-d5a7-403f-b9d0-168c0e42a937/openstack-network-exporter/0.log" Nov 23 15:44:39 crc kubenswrapper[4718]: I1123 15:44:39.762572 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a0d21970-a68c-4d2b-bbcb-18ae83284d95/openstack-network-exporter/0.log" Nov 23 15:44:39 crc kubenswrapper[4718]: I1123 15:44:39.869755 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a0d21970-a68c-4d2b-bbcb-18ae83284d95/ovsdbserver-nb/0.log" Nov 23 15:44:39 crc kubenswrapper[4718]: I1123 15:44:39.999749 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_33d2daa7-22e5-4713-9cc1-3d976c1559e3/openstack-network-exporter/0.log" Nov 23 15:44:40 crc kubenswrapper[4718]: I1123 15:44:40.105939 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_33d2daa7-22e5-4713-9cc1-3d976c1559e3/ovsdbserver-sb/0.log" Nov 23 15:44:40 crc kubenswrapper[4718]: I1123 15:44:40.243388 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7fdf4df4d-qlcjn_af2372a3-1e0c-4668-ab25-cfb8616a6d1b/placement-api/0.log" Nov 23 
15:44:40 crc kubenswrapper[4718]: I1123 15:44:40.293858 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7fdf4df4d-qlcjn_af2372a3-1e0c-4668-ab25-cfb8616a6d1b/placement-log/0.log" Nov 23 15:44:40 crc kubenswrapper[4718]: I1123 15:44:40.358687 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_133d2692-e5ce-4298-89d3-6fc11ab5f0b3/setup-container/0.log" Nov 23 15:44:40 crc kubenswrapper[4718]: I1123 15:44:40.525916 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_133d2692-e5ce-4298-89d3-6fc11ab5f0b3/setup-container/0.log" Nov 23 15:44:40 crc kubenswrapper[4718]: I1123 15:44:40.555352 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_133d2692-e5ce-4298-89d3-6fc11ab5f0b3/rabbitmq/0.log" Nov 23 15:44:40 crc kubenswrapper[4718]: I1123 15:44:40.578163 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6cd89fb4-b66f-4df5-940d-fe185bd5e039/setup-container/0.log" Nov 23 15:44:40 crc kubenswrapper[4718]: I1123 15:44:40.814322 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6cd89fb4-b66f-4df5-940d-fe185bd5e039/rabbitmq/0.log" Nov 23 15:44:40 crc kubenswrapper[4718]: I1123 15:44:40.816694 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6cd89fb4-b66f-4df5-940d-fe185bd5e039/setup-container/0.log" Nov 23 15:44:40 crc kubenswrapper[4718]: I1123 15:44:40.903851 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr_ce6b5147-989d-4b02-987b-1b6b3c4d9460/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:44:41 crc kubenswrapper[4718]: I1123 15:44:41.102537 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-km6kx_74d2f450-1dcb-42ee-9b6f-da7389ddc9ce/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:44:41 crc kubenswrapper[4718]: I1123 15:44:41.184079 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h_834d3c5a-a503-42a6-a71d-8e00fe358ec6/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:44:41 crc kubenswrapper[4718]: I1123 15:44:41.355403 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-4ngvk_a4cabb6a-7840-4f18-af7f-03bfffcb4c11/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:44:41 crc kubenswrapper[4718]: I1123 15:44:41.418263 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-lsf9c_e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485/ssh-known-hosts-edpm-deployment/0.log" Nov 23 15:44:41 crc kubenswrapper[4718]: I1123 15:44:41.593857 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-589b8777c9-j8mvv_62536478-1337-4bad-b5e3-77cf6dd4d54b/proxy-server/0.log" Nov 23 15:44:41 crc kubenswrapper[4718]: I1123 15:44:41.682312 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-589b8777c9-j8mvv_62536478-1337-4bad-b5e3-77cf6dd4d54b/proxy-httpd/0.log" Nov 23 15:44:41 crc kubenswrapper[4718]: I1123 15:44:41.816858 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-sx654_79cae030-375c-4b0e-9dfc-e823f922196b/swift-ring-rebalance/0.log" Nov 23 15:44:41 
crc kubenswrapper[4718]: I1123 15:44:41.869075 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/account-reaper/0.log" Nov 23 15:44:41 crc kubenswrapper[4718]: I1123 15:44:41.871715 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/account-auditor/0.log" Nov 23 15:44:42 crc kubenswrapper[4718]: I1123 15:44:42.044746 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/account-server/0.log" Nov 23 15:44:42 crc kubenswrapper[4718]: I1123 15:44:42.049464 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/account-replicator/0.log" Nov 23 15:44:42 crc kubenswrapper[4718]: I1123 15:44:42.112840 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/container-auditor/0.log" Nov 23 15:44:42 crc kubenswrapper[4718]: I1123 15:44:42.188905 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/container-replicator/0.log" Nov 23 15:44:42 crc kubenswrapper[4718]: I1123 15:44:42.311769 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/object-auditor/0.log" Nov 23 15:44:42 crc kubenswrapper[4718]: I1123 15:44:42.333616 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/container-updater/0.log" Nov 23 15:44:42 crc kubenswrapper[4718]: I1123 15:44:42.365243 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/container-server/0.log" Nov 23 15:44:42 crc kubenswrapper[4718]: I1123 15:44:42.436357 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/object-expirer/0.log" Nov 23 15:44:42 crc kubenswrapper[4718]: I1123 15:44:42.538782 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/object-updater/0.log" Nov 23 15:44:42 crc kubenswrapper[4718]: I1123 15:44:42.555251 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/object-server/0.log" Nov 23 15:44:42 crc kubenswrapper[4718]: I1123 15:44:42.621628 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/object-replicator/0.log" Nov 23 15:44:42 crc kubenswrapper[4718]: I1123 15:44:42.671711 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/rsync/0.log" Nov 23 15:44:42 crc kubenswrapper[4718]: I1123 15:44:42.758912 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/swift-recon-cron/0.log" Nov 23 15:44:42 crc kubenswrapper[4718]: I1123 15:44:42.929985 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-n6v68_34a8617a-3a87-48ad-b752-b324eaac4afe/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:44:43 crc kubenswrapper[4718]: I1123 15:44:43.002254 4718 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_tempest-tests-tempest_2053f5ea-ae54-4b1d-951f-2355f69f1062/tempest-tests-tempest-tests-runner/0.log" Nov 23 15:44:43 crc kubenswrapper[4718]: I1123 15:44:43.111290 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_eebef478-ad56-44b6-8ecf-20cc943f86b3/test-operator-logs-container/0.log" Nov 23 15:44:43 crc kubenswrapper[4718]: I1123 15:44:43.223416 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m_a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:44:51 crc kubenswrapper[4718]: I1123 15:44:51.700810 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_b5324d05-32a6-4859-9288-de1f3bd9389d/memcached/0.log" Nov 23 15:44:53 crc kubenswrapper[4718]: I1123 15:44:53.053074 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 15:44:53 crc kubenswrapper[4718]: I1123 15:44:53.053387 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 15:45:00 crc kubenswrapper[4718]: I1123 15:45:00.183170 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398545-6t59m"] Nov 23 15:45:00 crc kubenswrapper[4718]: E1123 15:45:00.185215 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c89ef69-b1b6-4f15-bc91-f2b7a0641384" containerName="container-00" Nov 23 15:45:00 crc kubenswrapper[4718]: I1123 15:45:00.185234 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c89ef69-b1b6-4f15-bc91-f2b7a0641384" containerName="container-00" Nov 23 15:45:00 crc kubenswrapper[4718]: I1123 15:45:00.185459 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c89ef69-b1b6-4f15-bc91-f2b7a0641384" containerName="container-00" Nov 23 15:45:00 crc kubenswrapper[4718]: I1123 15:45:00.186184 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398545-6t59m" Nov 23 15:45:00 crc kubenswrapper[4718]: I1123 15:45:00.188056 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 15:45:00 crc kubenswrapper[4718]: I1123 15:45:00.189315 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 15:45:00 crc kubenswrapper[4718]: I1123 15:45:00.194249 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398545-6t59m"] Nov 23 15:45:00 crc kubenswrapper[4718]: I1123 15:45:00.236122 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37b76158-28c3-4d2a-a4ba-cd8baee34e3d-config-volume\") pod \"collect-profiles-29398545-6t59m\" (UID: \"37b76158-28c3-4d2a-a4ba-cd8baee34e3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398545-6t59m" Nov 23 15:45:00 crc kubenswrapper[4718]: I1123 15:45:00.236188 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qttqv\" (UniqueName: \"kubernetes.io/projected/37b76158-28c3-4d2a-a4ba-cd8baee34e3d-kube-api-access-qttqv\") pod \"collect-profiles-29398545-6t59m\" (UID: \"37b76158-28c3-4d2a-a4ba-cd8baee34e3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398545-6t59m" Nov 23 15:45:00 crc kubenswrapper[4718]: I1123 15:45:00.236424 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37b76158-28c3-4d2a-a4ba-cd8baee34e3d-secret-volume\") pod \"collect-profiles-29398545-6t59m\" (UID: \"37b76158-28c3-4d2a-a4ba-cd8baee34e3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398545-6t59m" Nov 23 15:45:00 crc kubenswrapper[4718]: I1123 15:45:00.338207 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37b76158-28c3-4d2a-a4ba-cd8baee34e3d-secret-volume\") pod \"collect-profiles-29398545-6t59m\" (UID: \"37b76158-28c3-4d2a-a4ba-cd8baee34e3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398545-6t59m" Nov 23 15:45:00 crc kubenswrapper[4718]: I1123 15:45:00.338274 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37b76158-28c3-4d2a-a4ba-cd8baee34e3d-config-volume\") pod \"collect-profiles-29398545-6t59m\" (UID: \"37b76158-28c3-4d2a-a4ba-cd8baee34e3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398545-6t59m" Nov 23 15:45:00 crc kubenswrapper[4718]: I1123 15:45:00.338302 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qttqv\" (UniqueName: \"kubernetes.io/projected/37b76158-28c3-4d2a-a4ba-cd8baee34e3d-kube-api-access-qttqv\") pod \"collect-profiles-29398545-6t59m\" (UID: \"37b76158-28c3-4d2a-a4ba-cd8baee34e3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398545-6t59m" Nov 23 15:45:00 crc kubenswrapper[4718]: I1123 15:45:00.339395 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37b76158-28c3-4d2a-a4ba-cd8baee34e3d-config-volume\") pod 
\"collect-profiles-29398545-6t59m\" (UID: \"37b76158-28c3-4d2a-a4ba-cd8baee34e3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398545-6t59m" Nov 23 15:45:00 crc kubenswrapper[4718]: I1123 15:45:00.352305 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37b76158-28c3-4d2a-a4ba-cd8baee34e3d-secret-volume\") pod \"collect-profiles-29398545-6t59m\" (UID: \"37b76158-28c3-4d2a-a4ba-cd8baee34e3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398545-6t59m" Nov 23 15:45:00 crc kubenswrapper[4718]: I1123 15:45:00.365065 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qttqv\" (UniqueName: \"kubernetes.io/projected/37b76158-28c3-4d2a-a4ba-cd8baee34e3d-kube-api-access-qttqv\") pod \"collect-profiles-29398545-6t59m\" (UID: \"37b76158-28c3-4d2a-a4ba-cd8baee34e3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398545-6t59m" Nov 23 15:45:00 crc kubenswrapper[4718]: I1123 15:45:00.505950 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398545-6t59m" Nov 23 15:45:00 crc kubenswrapper[4718]: I1123 15:45:00.928250 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398545-6t59m"] Nov 23 15:45:01 crc kubenswrapper[4718]: I1123 15:45:01.255563 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398545-6t59m" event={"ID":"37b76158-28c3-4d2a-a4ba-cd8baee34e3d","Type":"ContainerStarted","Data":"0bf83ffac07a9e03cd79f06439c565a17ea25e666c1c2e3fa8e80a967894ffd1"} Nov 23 15:45:01 crc kubenswrapper[4718]: I1123 15:45:01.255914 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398545-6t59m" event={"ID":"37b76158-28c3-4d2a-a4ba-cd8baee34e3d","Type":"ContainerStarted","Data":"059acbe26bb748af13d890f2fb175dd6f1b50d4d8c0e76d39c43b1ab6c7e01e8"} Nov 23 15:45:01 crc kubenswrapper[4718]: I1123 15:45:01.274040 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29398545-6t59m" podStartSLOduration=1.274022448 podStartE2EDuration="1.274022448s" podCreationTimestamp="2025-11-23 15:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:45:01.27333564 +0000 UTC m=+3552.512955484" watchObservedRunningTime="2025-11-23 15:45:01.274022448 +0000 UTC m=+3552.513642292" Nov 23 15:45:02 crc kubenswrapper[4718]: I1123 15:45:02.271710 4718 generic.go:334] "Generic (PLEG): container finished" podID="37b76158-28c3-4d2a-a4ba-cd8baee34e3d" containerID="0bf83ffac07a9e03cd79f06439c565a17ea25e666c1c2e3fa8e80a967894ffd1" exitCode=0 Nov 23 15:45:02 crc kubenswrapper[4718]: I1123 15:45:02.271749 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398545-6t59m" event={"ID":"37b76158-28c3-4d2a-a4ba-cd8baee34e3d","Type":"ContainerDied","Data":"0bf83ffac07a9e03cd79f06439c565a17ea25e666c1c2e3fa8e80a967894ffd1"} Nov 23 15:45:03 crc kubenswrapper[4718]: I1123 15:45:03.646698 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398545-6t59m" Nov 23 15:45:03 crc kubenswrapper[4718]: I1123 15:45:03.704999 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37b76158-28c3-4d2a-a4ba-cd8baee34e3d-secret-volume\") pod \"37b76158-28c3-4d2a-a4ba-cd8baee34e3d\" (UID: \"37b76158-28c3-4d2a-a4ba-cd8baee34e3d\") " Nov 23 15:45:03 crc kubenswrapper[4718]: I1123 15:45:03.705153 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qttqv\" (UniqueName: \"kubernetes.io/projected/37b76158-28c3-4d2a-a4ba-cd8baee34e3d-kube-api-access-qttqv\") pod \"37b76158-28c3-4d2a-a4ba-cd8baee34e3d\" (UID: \"37b76158-28c3-4d2a-a4ba-cd8baee34e3d\") " Nov 23 15:45:03 crc kubenswrapper[4718]: I1123 15:45:03.705393 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37b76158-28c3-4d2a-a4ba-cd8baee34e3d-config-volume\") pod \"37b76158-28c3-4d2a-a4ba-cd8baee34e3d\" (UID: \"37b76158-28c3-4d2a-a4ba-cd8baee34e3d\") " Nov 23 15:45:03 crc kubenswrapper[4718]: I1123 15:45:03.706684 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37b76158-28c3-4d2a-a4ba-cd8baee34e3d-config-volume" (OuterVolumeSpecName: "config-volume") pod "37b76158-28c3-4d2a-a4ba-cd8baee34e3d" (UID: "37b76158-28c3-4d2a-a4ba-cd8baee34e3d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 15:45:03 crc kubenswrapper[4718]: I1123 15:45:03.712332 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b76158-28c3-4d2a-a4ba-cd8baee34e3d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "37b76158-28c3-4d2a-a4ba-cd8baee34e3d" (UID: "37b76158-28c3-4d2a-a4ba-cd8baee34e3d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 15:45:03 crc kubenswrapper[4718]: I1123 15:45:03.714596 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37b76158-28c3-4d2a-a4ba-cd8baee34e3d-kube-api-access-qttqv" (OuterVolumeSpecName: "kube-api-access-qttqv") pod "37b76158-28c3-4d2a-a4ba-cd8baee34e3d" (UID: "37b76158-28c3-4d2a-a4ba-cd8baee34e3d"). InnerVolumeSpecName "kube-api-access-qttqv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:45:03 crc kubenswrapper[4718]: I1123 15:45:03.807615 4718 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37b76158-28c3-4d2a-a4ba-cd8baee34e3d-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 15:45:03 crc kubenswrapper[4718]: I1123 15:45:03.807660 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qttqv\" (UniqueName: \"kubernetes.io/projected/37b76158-28c3-4d2a-a4ba-cd8baee34e3d-kube-api-access-qttqv\") on node \"crc\" DevicePath \"\"" Nov 23 15:45:03 crc kubenswrapper[4718]: I1123 15:45:03.807670 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37b76158-28c3-4d2a-a4ba-cd8baee34e3d-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 15:45:04 crc kubenswrapper[4718]: I1123 15:45:04.292697 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398545-6t59m" event={"ID":"37b76158-28c3-4d2a-a4ba-cd8baee34e3d","Type":"ContainerDied","Data":"059acbe26bb748af13d890f2fb175dd6f1b50d4d8c0e76d39c43b1ab6c7e01e8"} Nov 23 15:45:04 crc kubenswrapper[4718]: I1123 15:45:04.292743 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="059acbe26bb748af13d890f2fb175dd6f1b50d4d8c0e76d39c43b1ab6c7e01e8" Nov 23 15:45:04 crc kubenswrapper[4718]: I1123 15:45:04.292765 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398545-6t59m" Nov 23 15:45:04 crc kubenswrapper[4718]: I1123 15:45:04.351623 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398500-qk4nq"] Nov 23 15:45:04 crc kubenswrapper[4718]: I1123 15:45:04.359495 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398500-qk4nq"] Nov 23 15:45:04 crc kubenswrapper[4718]: I1123 15:45:04.457141 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae77b8e6-84d5-4680-a816-dced35246342" path="/var/lib/kubelet/pods/ae77b8e6-84d5-4680-a816-dced35246342/volumes" Nov 23 15:45:05 crc kubenswrapper[4718]: I1123 15:45:05.497225 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75fb479bcc-kwnhv_e402a0ac-7f35-4bab-9948-b664c0ef9636/kube-rbac-proxy/0.log" Nov 23 15:45:05 crc kubenswrapper[4718]: I1123 15:45:05.648823 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75fb479bcc-kwnhv_e402a0ac-7f35-4bab-9948-b664c0ef9636/manager/0.log" Nov 23 15:45:05 crc kubenswrapper[4718]: I1123 15:45:05.713936 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6498cbf48f-k82wv_b5aad852-89aa-459e-8771-50ef010620ef/manager/0.log" Nov 23 15:45:05 crc kubenswrapper[4718]: I1123 15:45:05.714813 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6498cbf48f-k82wv_b5aad852-89aa-459e-8771-50ef010620ef/kube-rbac-proxy/0.log" Nov 23 15:45:05 crc kubenswrapper[4718]: I1123 15:45:05.913971 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-767ccfd65f-v5dn9_ce59b10a-2110-44a2-9489-b1e06f6a1032/manager/0.log" Nov 23 15:45:05 crc kubenswrapper[4718]: I1123 15:45:05.940887 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-767ccfd65f-v5dn9_ce59b10a-2110-44a2-9489-b1e06f6a1032/kube-rbac-proxy/0.log" Nov 23 15:45:06 crc kubenswrapper[4718]: I1123 15:45:06.046521 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn_50ab9973-a819-4833-aaab-955e5d2eb560/util/0.log" Nov 23 15:45:06 crc kubenswrapper[4718]: I1123 15:45:06.239762 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn_50ab9973-a819-4833-aaab-955e5d2eb560/util/0.log" Nov 23 15:45:06 crc kubenswrapper[4718]: I1123 15:45:06.259656 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn_50ab9973-a819-4833-aaab-955e5d2eb560/pull/0.log" Nov 23 15:45:06 crc kubenswrapper[4718]: I1123 15:45:06.259856 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn_50ab9973-a819-4833-aaab-955e5d2eb560/pull/0.log" Nov 23 15:45:06 crc kubenswrapper[4718]: I1123 15:45:06.432131 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn_50ab9973-a819-4833-aaab-955e5d2eb560/util/0.log" Nov 23 15:45:06 crc kubenswrapper[4718]: I1123 15:45:06.453739 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn_50ab9973-a819-4833-aaab-955e5d2eb560/extract/0.log" Nov 23 15:45:06 crc kubenswrapper[4718]: I1123 15:45:06.471858 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn_50ab9973-a819-4833-aaab-955e5d2eb560/pull/0.log" Nov 23 15:45:06 crc kubenswrapper[4718]: I1123 15:45:06.655031 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7969689c84-h4hzc_960d1cfd-fc93-466c-8590-723c68c0bc05/kube-rbac-proxy/0.log" Nov 23 15:45:06 crc kubenswrapper[4718]: I1123 15:45:06.676318 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7969689c84-h4hzc_960d1cfd-fc93-466c-8590-723c68c0bc05/manager/0.log" Nov 23 15:45:06 crc kubenswrapper[4718]: I1123 15:45:06.740990 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-k88hn_1b5f1764-1a63-4fda-988c-49a8bc17fe79/kube-rbac-proxy/0.log" Nov 23 15:45:06 crc kubenswrapper[4718]: I1123 15:45:06.810647 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-k88hn_1b5f1764-1a63-4fda-988c-49a8bc17fe79/manager/0.log" Nov 23 15:45:06 crc kubenswrapper[4718]: I1123 15:45:06.868094 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-598f69df5d-8trcx_f4618913-9a14-4f47-89ec-9c4b0a931434/kube-rbac-proxy/0.log" Nov 23 15:45:06 crc kubenswrapper[4718]: I1123 
15:45:06.937596 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-598f69df5d-8trcx_f4618913-9a14-4f47-89ec-9c4b0a931434/manager/0.log" Nov 23 15:45:07 crc kubenswrapper[4718]: I1123 15:45:07.053908 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6dd8864d7c-h8b4l_b72e1603-d77f-4edc-87a2-3cc5469620fe/kube-rbac-proxy/0.log" Nov 23 15:45:07 crc kubenswrapper[4718]: I1123 15:45:07.202550 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-99b499f4-rw4vq_06302b9c-68a3-4b48-88d7-cc0885ca0156/kube-rbac-proxy/0.log" Nov 23 15:45:07 crc kubenswrapper[4718]: I1123 15:45:07.234129 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6dd8864d7c-h8b4l_b72e1603-d77f-4edc-87a2-3cc5469620fe/manager/0.log" Nov 23 15:45:07 crc kubenswrapper[4718]: I1123 15:45:07.280876 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-99b499f4-rw4vq_06302b9c-68a3-4b48-88d7-cc0885ca0156/manager/0.log" Nov 23 15:45:07 crc kubenswrapper[4718]: I1123 15:45:07.450919 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7454b96578-4g92d_7cc68ab9-c26a-437b-adcd-977eb063fe25/manager/0.log" Nov 23 15:45:07 crc kubenswrapper[4718]: I1123 15:45:07.463034 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7454b96578-4g92d_7cc68ab9-c26a-437b-adcd-977eb063fe25/kube-rbac-proxy/0.log" Nov 23 15:45:07 crc kubenswrapper[4718]: I1123 15:45:07.554528 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58f887965d-47csv_020fa89c-9d76-439c-aee1-0843636d4469/kube-rbac-proxy/0.log" Nov 23 15:45:07 crc kubenswrapper[4718]: I1123 15:45:07.675280 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58f887965d-47csv_020fa89c-9d76-439c-aee1-0843636d4469/manager/0.log" Nov 23 15:45:07 crc kubenswrapper[4718]: I1123 15:45:07.677434 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-6785w_0d78d642-939c-47e3-8d60-665dff178d44/kube-rbac-proxy/0.log" Nov 23 15:45:07 crc kubenswrapper[4718]: I1123 15:45:07.787923 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-6785w_0d78d642-939c-47e3-8d60-665dff178d44/manager/0.log" Nov 23 15:45:07 crc kubenswrapper[4718]: I1123 15:45:07.882013 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-g72r5_0b0e1ffa-6dff-4523-911c-ad0744bd9153/kube-rbac-proxy/0.log" Nov 23 15:45:07 crc kubenswrapper[4718]: I1123 15:45:07.926493 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-g72r5_0b0e1ffa-6dff-4523-911c-ad0744bd9153/manager/0.log" Nov 23 15:45:08 crc kubenswrapper[4718]: I1123 15:45:08.013982 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-cfbb9c588-q9nlr_934a178a-2178-4c2d-bda8-9bb817f78644/kube-rbac-proxy/0.log" Nov 23 15:45:08 crc 
kubenswrapper[4718]: I1123 15:45:08.118023 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-cfbb9c588-q9nlr_934a178a-2178-4c2d-bda8-9bb817f78644/manager/0.log" Nov 23 15:45:08 crc kubenswrapper[4718]: I1123 15:45:08.205080 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-54cfbf4c7d-kpg5q_968c85cb-d53b-40e8-9651-7127fc58f61a/kube-rbac-proxy/0.log" Nov 23 15:45:08 crc kubenswrapper[4718]: I1123 15:45:08.224985 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-54cfbf4c7d-kpg5q_968c85cb-d53b-40e8-9651-7127fc58f61a/manager/0.log" Nov 23 15:45:08 crc kubenswrapper[4718]: I1123 15:45:08.305403 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk_ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d/kube-rbac-proxy/0.log" Nov 23 15:45:08 crc kubenswrapper[4718]: I1123 15:45:08.388040 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk_ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d/manager/0.log" Nov 23 15:45:08 crc kubenswrapper[4718]: I1123 15:45:08.506694 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-669b8498dc-8hbzm_2f871886-2351-4861-a1d6-3f7711fa936e/kube-rbac-proxy/0.log" Nov 23 15:45:08 crc kubenswrapper[4718]: I1123 15:45:08.644856 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-597d69585c-9tk5p_5b1ab78b-6600-4c1f-a302-f3b0369892c2/kube-rbac-proxy/0.log" Nov 23 15:45:08 crc kubenswrapper[4718]: I1123 15:45:08.900389 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-j47c5_6fbfe838-c27e-484d-a610-882fbb719e14/registry-server/0.log" Nov 23 15:45:08 crc kubenswrapper[4718]: I1123 15:45:08.938918 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-597d69585c-9tk5p_5b1ab78b-6600-4c1f-a302-f3b0369892c2/operator/0.log" Nov 23 15:45:09 crc kubenswrapper[4718]: I1123 15:45:09.077023 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54fc5f65b7-5wtcx_306074ad-d60e-41a2-975b-901d8874be23/kube-rbac-proxy/0.log" Nov 23 15:45:09 crc kubenswrapper[4718]: I1123 15:45:09.220014 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54fc5f65b7-5wtcx_306074ad-d60e-41a2-975b-901d8874be23/manager/0.log" Nov 23 15:45:09 crc kubenswrapper[4718]: I1123 15:45:09.445814 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b797b8dff-7wn4t_9d46e777-1d50-42a0-b20f-a24b155a0e43/manager/0.log" Nov 23 15:45:09 crc kubenswrapper[4718]: I1123 15:45:09.455389 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b797b8dff-7wn4t_9d46e777-1d50-42a0-b20f-a24b155a0e43/kube-rbac-proxy/0.log" Nov 23 15:45:09 crc kubenswrapper[4718]: I1123 15:45:09.544934 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-bnrmb_76e89747-f3cb-45cd-beff-22193095b455/operator/0.log" Nov 23 15:45:09 crc kubenswrapper[4718]: I1123 15:45:09.709136 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d656998f4-p7dq2_313f2889-e11a-440a-8358-612780f4a348/kube-rbac-proxy/0.log" Nov 23 15:45:09 crc kubenswrapper[4718]: I1123 15:45:09.710464 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-669b8498dc-8hbzm_2f871886-2351-4861-a1d6-3f7711fa936e/manager/0.log" Nov 23 15:45:09 crc kubenswrapper[4718]: I1123 15:45:09.768323 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d656998f4-p7dq2_313f2889-e11a-440a-8358-612780f4a348/manager/0.log" Nov 23 15:45:09 crc kubenswrapper[4718]: I1123 15:45:09.912678 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6d4bf84b58-qtpps_424e5dfa-98a5-480c-aeb9-8f279b2fdee4/kube-rbac-proxy/0.log" Nov 23 15:45:09 crc kubenswrapper[4718]: I1123 15:45:09.990886 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6d4bf84b58-qtpps_424e5dfa-98a5-480c-aeb9-8f279b2fdee4/manager/0.log" Nov 23 15:45:10 crc kubenswrapper[4718]: I1123 15:45:10.010871 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-b4c496f69-kt77s_8e9db0b8-bd2b-45fa-8105-2524e81bcd70/kube-rbac-proxy/0.log" Nov 23 15:45:10 crc kubenswrapper[4718]: I1123 15:45:10.072492 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-b4c496f69-kt77s_8e9db0b8-bd2b-45fa-8105-2524e81bcd70/manager/0.log" Nov 23 15:45:10 crc kubenswrapper[4718]: I1123 15:45:10.173276 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-8c6448b9f-bfnhk_e3358e41-4842-4768-8235-96a8166d43b0/kube-rbac-proxy/0.log" Nov 23 15:45:10 crc kubenswrapper[4718]: I1123 15:45:10.202725 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-8c6448b9f-bfnhk_e3358e41-4842-4768-8235-96a8166d43b0/manager/0.log" Nov 23 15:45:23 crc kubenswrapper[4718]: I1123 15:45:23.052679 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 15:45:23 crc kubenswrapper[4718]: I1123 15:45:23.053170 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 15:45:23 crc kubenswrapper[4718]: I1123 15:45:23.053217 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" Nov 23 15:45:23 crc kubenswrapper[4718]: I1123 15:45:23.053999 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757"} pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 15:45:23 crc kubenswrapper[4718]: I1123 15:45:23.054051 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" containerID="cri-o://a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757" gracePeriod=600 Nov 23 15:45:23 crc kubenswrapper[4718]: E1123 15:45:23.173778 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:45:23 crc kubenswrapper[4718]: I1123 15:45:23.582463 4718 generic.go:334] "Generic (PLEG): container finished" podID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757" exitCode=0 Nov 23 15:45:23 crc kubenswrapper[4718]: I1123 15:45:23.582525 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerDied","Data":"a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757"} Nov 23 15:45:23 crc kubenswrapper[4718]: I1123 15:45:23.582578 4718 scope.go:117] "RemoveContainer" containerID="74327bd72e539c965b0611b0786967503764bdb42e874d7079114964edd8c8fa" Nov 23 15:45:23 crc kubenswrapper[4718]: I1123 15:45:23.583324 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757" Nov 23 15:45:23 crc kubenswrapper[4718]: E1123 15:45:23.583896 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:45:25 crc kubenswrapper[4718]: I1123 15:45:25.392799 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-8xb95_6bfbc7c8-2c86-4bf8-87b6-6eec240a5e2e/control-plane-machine-set-operator/0.log" Nov 23 15:45:25 crc kubenswrapper[4718]: I1123 15:45:25.567938 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7vdf5_95464c4e-4616-4ab3-9928-4dc41beee4af/kube-rbac-proxy/0.log" Nov 23 15:45:25 crc kubenswrapper[4718]: I1123 15:45:25.573723 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7vdf5_95464c4e-4616-4ab3-9928-4dc41beee4af/machine-api-operator/0.log" Nov 23 15:45:37 crc kubenswrapper[4718]: I1123 15:45:37.945355 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-g6g5c_9c7e4ce6-8467-4656-9451-4ca2cf5f05e3/cert-manager-controller/0.log" Nov 23 15:45:38 crc kubenswrapper[4718]: I1123 15:45:38.078305 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-dlx5n_e6b032f0-b0b8-4db8-af64-ac70e535c9e7/cert-manager-cainjector/0.log" Nov 23 15:45:38 crc kubenswrapper[4718]: I1123 15:45:38.183898 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-wtn4p_006f97d3-c32d-4175-b6d1-41f25d854d69/cert-manager-webhook/0.log" Nov 23 15:45:38 crc kubenswrapper[4718]: I1123 15:45:38.441840 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757" Nov 23 15:45:38 crc kubenswrapper[4718]: E1123 15:45:38.442154 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:45:50 crc kubenswrapper[4718]: I1123 15:45:50.539282 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-lvlqg_36e9d48f-9291-4ffd-8ca3-342d488e8bc2/nmstate-console-plugin/0.log" Nov 23 15:45:50 crc kubenswrapper[4718]: I1123 15:45:50.672495 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-z9n6m_a6e7f145-b630-4486-9a8d-e08d114c3f0a/nmstate-handler/0.log" Nov 23 15:45:50 crc kubenswrapper[4718]: I1123 15:45:50.734422 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-65tj8_955636b8-9879-4e63-a399-6ac037c1fcd5/kube-rbac-proxy/0.log" Nov 23 15:45:50 crc kubenswrapper[4718]: I1123 15:45:50.761412 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-65tj8_955636b8-9879-4e63-a399-6ac037c1fcd5/nmstate-metrics/0.log" Nov 23 15:45:50 crc kubenswrapper[4718]: I1123 15:45:50.920123 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-vj25n_62a579ff-5a89-4c20-afea-9419bd3bc1a0/nmstate-operator/0.log" Nov 23 15:45:50 crc kubenswrapper[4718]: I1123 15:45:50.984246 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-t5mgc_21067045-a7f2-4d80-a1fe-d2c15d3b7ee9/nmstate-webhook/0.log" Nov 23 15:45:51 crc kubenswrapper[4718]: I1123 15:45:51.440875 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757" Nov 23 15:45:51 crc kubenswrapper[4718]: E1123 15:45:51.441273 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:46:01 crc kubenswrapper[4718]: I1123 15:46:01.651760 4718 scope.go:117] "RemoveContainer" 
containerID="2c9435f41e32628f6136745e3e8ede0d36aa1c4c5fc5e600b81656983549e9ec" Nov 23 15:46:03 crc kubenswrapper[4718]: I1123 15:46:03.441228 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757" Nov 23 15:46:03 crc kubenswrapper[4718]: E1123 15:46:03.442575 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:46:04 crc kubenswrapper[4718]: I1123 15:46:04.244476 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-n4n8k_28328d3c-1a5b-45f9-a606-9f604db00a0a/kube-rbac-proxy/0.log" Nov 23 15:46:04 crc kubenswrapper[4718]: I1123 15:46:04.335596 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-n4n8k_28328d3c-1a5b-45f9-a606-9f604db00a0a/controller/0.log" Nov 23 15:46:04 crc kubenswrapper[4718]: I1123 15:46:04.388169 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/cp-frr-files/0.log" Nov 23 15:46:04 crc kubenswrapper[4718]: I1123 15:46:04.669983 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/cp-reloader/0.log" Nov 23 15:46:04 crc kubenswrapper[4718]: I1123 15:46:04.693030 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/cp-frr-files/0.log" Nov 23 15:46:04 crc kubenswrapper[4718]: I1123 15:46:04.716413 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/cp-reloader/0.log" Nov 23 15:46:04 crc kubenswrapper[4718]: I1123 15:46:04.725714 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/cp-metrics/0.log" Nov 23 15:46:04 crc kubenswrapper[4718]: I1123 15:46:04.856327 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/cp-frr-files/0.log" Nov 23 15:46:04 crc kubenswrapper[4718]: I1123 15:46:04.896047 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/cp-reloader/0.log" Nov 23 15:46:04 crc kubenswrapper[4718]: I1123 15:46:04.910409 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/cp-metrics/0.log" Nov 23 15:46:04 crc kubenswrapper[4718]: I1123 15:46:04.934120 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/cp-metrics/0.log" Nov 23 15:46:05 crc kubenswrapper[4718]: I1123 15:46:05.075969 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/cp-metrics/0.log" Nov 23 15:46:05 crc kubenswrapper[4718]: I1123 15:46:05.082197 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/cp-frr-files/0.log" Nov 23 15:46:05 
crc kubenswrapper[4718]: I1123 15:46:05.092191 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/cp-reloader/0.log"
Nov 23 15:46:05 crc kubenswrapper[4718]: I1123 15:46:05.118570 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/controller/0.log"
Nov 23 15:46:05 crc kubenswrapper[4718]: I1123 15:46:05.254325 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/kube-rbac-proxy/0.log"
Nov 23 15:46:05 crc kubenswrapper[4718]: I1123 15:46:05.261983 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/frr-metrics/0.log"
Nov 23 15:46:05 crc kubenswrapper[4718]: I1123 15:46:05.340927 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/kube-rbac-proxy-frr/0.log"
Nov 23 15:46:05 crc kubenswrapper[4718]: I1123 15:46:05.471750 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/reloader/0.log"
Nov 23 15:46:05 crc kubenswrapper[4718]: I1123 15:46:05.574374 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-z66k9_5cd190ff-29e1-4d41-9687-57c554820cb4/frr-k8s-webhook-server/0.log"
Nov 23 15:46:05 crc kubenswrapper[4718]: I1123 15:46:05.691654 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-768fb95d78-mp5lj_44722e76-1f31-46d4-b765-abd86f655b27/manager/0.log"
Nov 23 15:46:05 crc kubenswrapper[4718]: I1123 15:46:05.925661 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7bf7789474-dgc67_0e181755-dfbb-4608-b061-cbb0e95d6f95/webhook-server/0.log"
Nov 23 15:46:06 crc kubenswrapper[4718]: I1123 15:46:06.014597 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-65gj6_e56ba209-dfa7-4f05-b9fd-e156af86cd9f/kube-rbac-proxy/0.log"
Nov 23 15:46:06 crc kubenswrapper[4718]: I1123 15:46:06.469070 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-65gj6_e56ba209-dfa7-4f05-b9fd-e156af86cd9f/speaker/0.log"
Nov 23 15:46:06 crc kubenswrapper[4718]: I1123 15:46:06.661555 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/frr/0.log"
Nov 23 15:46:16 crc kubenswrapper[4718]: I1123 15:46:16.441118 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757"
Nov 23 15:46:16 crc kubenswrapper[4718]: E1123 15:46:16.441885 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:46:17 crc kubenswrapper[4718]: I1123 15:46:17.850063 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2_363974b8-229c-43e4-85cf-a7e4187ed8d4/util/0.log"
Nov 23 15:46:18 crc kubenswrapper[4718]: I1123 15:46:18.043798 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2_363974b8-229c-43e4-85cf-a7e4187ed8d4/util/0.log"
Nov 23 15:46:18 crc kubenswrapper[4718]: I1123 15:46:18.071642 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2_363974b8-229c-43e4-85cf-a7e4187ed8d4/pull/0.log"
Nov 23 15:46:18 crc kubenswrapper[4718]: I1123 15:46:18.094659 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2_363974b8-229c-43e4-85cf-a7e4187ed8d4/pull/0.log"
Nov 23 15:46:18 crc kubenswrapper[4718]: I1123 15:46:18.254988 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2_363974b8-229c-43e4-85cf-a7e4187ed8d4/pull/0.log"
Nov 23 15:46:18 crc kubenswrapper[4718]: I1123 15:46:18.261865 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2_363974b8-229c-43e4-85cf-a7e4187ed8d4/util/0.log"
Nov 23 15:46:18 crc kubenswrapper[4718]: I1123 15:46:18.274479 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2_363974b8-229c-43e4-85cf-a7e4187ed8d4/extract/0.log"
Nov 23 15:46:18 crc kubenswrapper[4718]: I1123 15:46:18.432157 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nb5q4_12f7b2fd-ebf7-4c33-8da3-bfd473790e77/extract-utilities/0.log"
Nov 23 15:46:18 crc kubenswrapper[4718]: I1123 15:46:18.581654 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nb5q4_12f7b2fd-ebf7-4c33-8da3-bfd473790e77/extract-content/0.log"
Nov 23 15:46:18 crc kubenswrapper[4718]: I1123 15:46:18.621620 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nb5q4_12f7b2fd-ebf7-4c33-8da3-bfd473790e77/extract-content/0.log"
Nov 23 15:46:18 crc kubenswrapper[4718]: I1123 15:46:18.634861 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nb5q4_12f7b2fd-ebf7-4c33-8da3-bfd473790e77/extract-utilities/0.log"
Nov 23 15:46:18 crc kubenswrapper[4718]: I1123 15:46:18.779190 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nb5q4_12f7b2fd-ebf7-4c33-8da3-bfd473790e77/extract-utilities/0.log"
Nov 23 15:46:18 crc kubenswrapper[4718]: I1123 15:46:18.796924 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nb5q4_12f7b2fd-ebf7-4c33-8da3-bfd473790e77/extract-content/0.log"
Nov 23 15:46:19 crc kubenswrapper[4718]: I1123 15:46:19.022866 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7wmpb_7dffe07d-8aa8-46b3-a5a5-28d8152d6df3/extract-utilities/0.log"
Nov 23 15:46:19 crc kubenswrapper[4718]: I1123 15:46:19.181259 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7wmpb_7dffe07d-8aa8-46b3-a5a5-28d8152d6df3/extract-utilities/0.log"
Nov 23 15:46:19 crc kubenswrapper[4718]: I1123 15:46:19.263505 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7wmpb_7dffe07d-8aa8-46b3-a5a5-28d8152d6df3/extract-content/0.log"
Nov 23 15:46:19 crc kubenswrapper[4718]: I1123 15:46:19.294236 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7wmpb_7dffe07d-8aa8-46b3-a5a5-28d8152d6df3/extract-content/0.log"
Nov 23 15:46:19 crc kubenswrapper[4718]: I1123 15:46:19.298451 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nb5q4_12f7b2fd-ebf7-4c33-8da3-bfd473790e77/registry-server/0.log"
Nov 23 15:46:19 crc kubenswrapper[4718]: I1123 15:46:19.522052 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7wmpb_7dffe07d-8aa8-46b3-a5a5-28d8152d6df3/extract-content/0.log"
Nov 23 15:46:19 crc kubenswrapper[4718]: I1123 15:46:19.525720 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7wmpb_7dffe07d-8aa8-46b3-a5a5-28d8152d6df3/extract-utilities/0.log"
Nov 23 15:46:19 crc kubenswrapper[4718]: I1123 15:46:19.743762 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx_756bc291-abf5-4395-8e55-5140aae72299/util/0.log"
Nov 23 15:46:19 crc kubenswrapper[4718]: I1123 15:46:19.968543 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx_756bc291-abf5-4395-8e55-5140aae72299/pull/0.log"
Nov 23 15:46:19 crc kubenswrapper[4718]: I1123 15:46:19.968579 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx_756bc291-abf5-4395-8e55-5140aae72299/pull/0.log"
Nov 23 15:46:20 crc kubenswrapper[4718]: I1123 15:46:20.030179 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx_756bc291-abf5-4395-8e55-5140aae72299/util/0.log"
Nov 23 15:46:20 crc kubenswrapper[4718]: I1123 15:46:20.166401 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7wmpb_7dffe07d-8aa8-46b3-a5a5-28d8152d6df3/registry-server/0.log"
Nov 23 15:46:20 crc kubenswrapper[4718]: I1123 15:46:20.197510 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx_756bc291-abf5-4395-8e55-5140aae72299/util/0.log"
Nov 23 15:46:20 crc kubenswrapper[4718]: I1123 15:46:20.200974 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx_756bc291-abf5-4395-8e55-5140aae72299/pull/0.log"
Nov 23 15:46:20 crc kubenswrapper[4718]: I1123 15:46:20.231776 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx_756bc291-abf5-4395-8e55-5140aae72299/extract/0.log"
Nov 23 15:46:20 crc kubenswrapper[4718]: I1123 15:46:20.397969 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-rdkgp_33a0a6b4-7aa0-4718-80f1-2d13fae9e761/marketplace-operator/0.log"
Nov 23 15:46:20 crc kubenswrapper[4718]: I1123 15:46:20.470556 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pn9qf_13ce74fe-0bed-4d92-9587-e933c3a3b03c/extract-utilities/0.log"
Nov 23 15:46:20 crc kubenswrapper[4718]: I1123 15:46:20.620913 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pn9qf_13ce74fe-0bed-4d92-9587-e933c3a3b03c/extract-utilities/0.log"
Nov 23 15:46:20 crc kubenswrapper[4718]: I1123 15:46:20.643835 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pn9qf_13ce74fe-0bed-4d92-9587-e933c3a3b03c/extract-content/0.log"
Nov 23 15:46:20 crc kubenswrapper[4718]: I1123 15:46:20.672361 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pn9qf_13ce74fe-0bed-4d92-9587-e933c3a3b03c/extract-content/0.log"
Nov 23 15:46:20 crc kubenswrapper[4718]: I1123 15:46:20.837458 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pn9qf_13ce74fe-0bed-4d92-9587-e933c3a3b03c/extract-content/0.log"
Nov 23 15:46:20 crc kubenswrapper[4718]: I1123 15:46:20.864049 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pn9qf_13ce74fe-0bed-4d92-9587-e933c3a3b03c/extract-utilities/0.log"
Nov 23 15:46:20 crc kubenswrapper[4718]: I1123 15:46:20.986372 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pn9qf_13ce74fe-0bed-4d92-9587-e933c3a3b03c/registry-server/0.log"
Nov 23 15:46:21 crc kubenswrapper[4718]: I1123 15:46:21.070285 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjtk6_81743302-97b1-40c2-953f-070a5b775a74/extract-utilities/0.log"
Nov 23 15:46:21 crc kubenswrapper[4718]: I1123 15:46:21.209297 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjtk6_81743302-97b1-40c2-953f-070a5b775a74/extract-utilities/0.log"
Nov 23 15:46:21 crc kubenswrapper[4718]: I1123 15:46:21.262712 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjtk6_81743302-97b1-40c2-953f-070a5b775a74/extract-content/0.log"
Nov 23 15:46:21 crc kubenswrapper[4718]: I1123 15:46:21.265523 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjtk6_81743302-97b1-40c2-953f-070a5b775a74/extract-content/0.log"
Nov 23 15:46:21 crc kubenswrapper[4718]: I1123 15:46:21.509237 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjtk6_81743302-97b1-40c2-953f-070a5b775a74/extract-utilities/0.log"
Nov 23 15:46:21 crc kubenswrapper[4718]: I1123 15:46:21.516622 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjtk6_81743302-97b1-40c2-953f-070a5b775a74/extract-content/0.log"
Nov 23 15:46:21 crc kubenswrapper[4718]: I1123 15:46:21.888641 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjtk6_81743302-97b1-40c2-953f-070a5b775a74/registry-server/0.log"
Nov 23 15:46:31 crc kubenswrapper[4718]: I1123 15:46:31.441245 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757"
Nov 23 15:46:31 crc kubenswrapper[4718]: E1123 15:46:31.441836 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:46:46 crc kubenswrapper[4718]: I1123 15:46:46.441247 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757"
Nov 23 15:46:46 crc kubenswrapper[4718]: E1123 15:46:46.441987 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:46:59 crc kubenswrapper[4718]: I1123 15:46:59.441796 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757"
Nov 23 15:46:59 crc kubenswrapper[4718]: E1123 15:46:59.442868 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:47:10 crc kubenswrapper[4718]: I1123 15:47:10.447204 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757"
Nov 23 15:47:10 crc kubenswrapper[4718]: E1123 15:47:10.447752 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:47:21 crc kubenswrapper[4718]: I1123 15:47:21.441321 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757"
Nov 23 15:47:21 crc kubenswrapper[4718]: E1123 15:47:21.442185 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:47:35 crc kubenswrapper[4718]: I1123 15:47:35.442913 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757"
Nov 23 15:47:35 crc kubenswrapper[4718]: E1123 15:47:35.446170 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:47:47 crc kubenswrapper[4718]: I1123 15:47:47.441865 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757"
Nov 23 15:47:47 crc kubenswrapper[4718]: E1123 15:47:47.442623 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:48:02 crc kubenswrapper[4718]: I1123 15:48:02.055603 4718 generic.go:334] "Generic (PLEG): container finished" podID="473edc6e-9b5b-4f07-848d-67153e247ccf" containerID="9608cd475478ea4581336cb0ec096da2ed490f4e7f0510ada1a7e952fd4b4834" exitCode=0
Nov 23 15:48:02 crc kubenswrapper[4718]: I1123 15:48:02.055714 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xm2vt/must-gather-mdfh5" event={"ID":"473edc6e-9b5b-4f07-848d-67153e247ccf","Type":"ContainerDied","Data":"9608cd475478ea4581336cb0ec096da2ed490f4e7f0510ada1a7e952fd4b4834"}
Nov 23 15:48:02 crc kubenswrapper[4718]: I1123 15:48:02.056878 4718 scope.go:117] "RemoveContainer" containerID="9608cd475478ea4581336cb0ec096da2ed490f4e7f0510ada1a7e952fd4b4834"
Nov 23 15:48:02 crc kubenswrapper[4718]: I1123 15:48:02.443127 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757"
Nov 23 15:48:02 crc kubenswrapper[4718]: E1123 15:48:02.443408 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:48:02 crc kubenswrapper[4718]: I1123 15:48:02.729360 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xm2vt_must-gather-mdfh5_473edc6e-9b5b-4f07-848d-67153e247ccf/gather/0.log"
Nov 23 15:48:10 crc kubenswrapper[4718]: I1123 15:48:10.542246 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xm2vt/must-gather-mdfh5"]
Nov 23 15:48:10 crc kubenswrapper[4718]: I1123 15:48:10.543003 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-xm2vt/must-gather-mdfh5" podUID="473edc6e-9b5b-4f07-848d-67153e247ccf" containerName="copy" containerID="cri-o://865a0b5bce6c2c6f8e85e9ecd41e344156b3dc52a8f35d6a00908d0b0f6e389d" gracePeriod=2
Nov 23 15:48:10 crc kubenswrapper[4718]: I1123 15:48:10.555766 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xm2vt/must-gather-mdfh5"]
Nov 23 15:48:10 crc kubenswrapper[4718]: I1123 15:48:10.980883 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xm2vt_must-gather-mdfh5_473edc6e-9b5b-4f07-848d-67153e247ccf/copy/0.log"
Nov 23 15:48:10 crc kubenswrapper[4718]: I1123 15:48:10.981573 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xm2vt/must-gather-mdfh5"
Nov 23 15:48:11 crc kubenswrapper[4718]: I1123 15:48:11.001184 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hssm\" (UniqueName: \"kubernetes.io/projected/473edc6e-9b5b-4f07-848d-67153e247ccf-kube-api-access-2hssm\") pod \"473edc6e-9b5b-4f07-848d-67153e247ccf\" (UID: \"473edc6e-9b5b-4f07-848d-67153e247ccf\") "
Nov 23 15:48:11 crc kubenswrapper[4718]: I1123 15:48:11.001303 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/473edc6e-9b5b-4f07-848d-67153e247ccf-must-gather-output\") pod \"473edc6e-9b5b-4f07-848d-67153e247ccf\" (UID: \"473edc6e-9b5b-4f07-848d-67153e247ccf\") "
Nov 23 15:48:11 crc kubenswrapper[4718]: I1123 15:48:11.007371 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/473edc6e-9b5b-4f07-848d-67153e247ccf-kube-api-access-2hssm" (OuterVolumeSpecName: "kube-api-access-2hssm") pod "473edc6e-9b5b-4f07-848d-67153e247ccf" (UID: "473edc6e-9b5b-4f07-848d-67153e247ccf"). InnerVolumeSpecName "kube-api-access-2hssm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:48:11 crc kubenswrapper[4718]: I1123 15:48:11.103173 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hssm\" (UniqueName: \"kubernetes.io/projected/473edc6e-9b5b-4f07-848d-67153e247ccf-kube-api-access-2hssm\") on node \"crc\" DevicePath \"\""
Nov 23 15:48:11 crc kubenswrapper[4718]: I1123 15:48:11.137382 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/473edc6e-9b5b-4f07-848d-67153e247ccf-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "473edc6e-9b5b-4f07-848d-67153e247ccf" (UID: "473edc6e-9b5b-4f07-848d-67153e247ccf"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 15:48:11 crc kubenswrapper[4718]: I1123 15:48:11.160528 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xm2vt_must-gather-mdfh5_473edc6e-9b5b-4f07-848d-67153e247ccf/copy/0.log"
Nov 23 15:48:11 crc kubenswrapper[4718]: I1123 15:48:11.160948 4718 generic.go:334] "Generic (PLEG): container finished" podID="473edc6e-9b5b-4f07-848d-67153e247ccf" containerID="865a0b5bce6c2c6f8e85e9ecd41e344156b3dc52a8f35d6a00908d0b0f6e389d" exitCode=143
Nov 23 15:48:11 crc kubenswrapper[4718]: I1123 15:48:11.161044 4718 scope.go:117] "RemoveContainer" containerID="865a0b5bce6c2c6f8e85e9ecd41e344156b3dc52a8f35d6a00908d0b0f6e389d"
Nov 23 15:48:11 crc kubenswrapper[4718]: I1123 15:48:11.161263 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xm2vt/must-gather-mdfh5"
Nov 23 15:48:11 crc kubenswrapper[4718]: I1123 15:48:11.195127 4718 scope.go:117] "RemoveContainer" containerID="9608cd475478ea4581336cb0ec096da2ed490f4e7f0510ada1a7e952fd4b4834"
Nov 23 15:48:11 crc kubenswrapper[4718]: I1123 15:48:11.204872 4718 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/473edc6e-9b5b-4f07-848d-67153e247ccf-must-gather-output\") on node \"crc\" DevicePath \"\""
Nov 23 15:48:11 crc kubenswrapper[4718]: I1123 15:48:11.246832 4718 scope.go:117] "RemoveContainer" containerID="865a0b5bce6c2c6f8e85e9ecd41e344156b3dc52a8f35d6a00908d0b0f6e389d"
Nov 23 15:48:11 crc kubenswrapper[4718]: E1123 15:48:11.248774 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"865a0b5bce6c2c6f8e85e9ecd41e344156b3dc52a8f35d6a00908d0b0f6e389d\": container with ID starting with 865a0b5bce6c2c6f8e85e9ecd41e344156b3dc52a8f35d6a00908d0b0f6e389d not found: ID does not exist" containerID="865a0b5bce6c2c6f8e85e9ecd41e344156b3dc52a8f35d6a00908d0b0f6e389d"
Nov 23 15:48:11 crc kubenswrapper[4718]: I1123 15:48:11.248872 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"865a0b5bce6c2c6f8e85e9ecd41e344156b3dc52a8f35d6a00908d0b0f6e389d"} err="failed to get container status \"865a0b5bce6c2c6f8e85e9ecd41e344156b3dc52a8f35d6a00908d0b0f6e389d\": rpc error: code = NotFound desc = could not find container \"865a0b5bce6c2c6f8e85e9ecd41e344156b3dc52a8f35d6a00908d0b0f6e389d\": container with ID starting with 865a0b5bce6c2c6f8e85e9ecd41e344156b3dc52a8f35d6a00908d0b0f6e389d not found: ID does not exist"
Nov 23 15:48:11 crc kubenswrapper[4718]: I1123 15:48:11.249428 4718 scope.go:117] "RemoveContainer" containerID="9608cd475478ea4581336cb0ec096da2ed490f4e7f0510ada1a7e952fd4b4834"
Nov 23 15:48:11 crc kubenswrapper[4718]: E1123 15:48:11.249886 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9608cd475478ea4581336cb0ec096da2ed490f4e7f0510ada1a7e952fd4b4834\": container with ID starting with 9608cd475478ea4581336cb0ec096da2ed490f4e7f0510ada1a7e952fd4b4834 not found: ID does not exist" containerID="9608cd475478ea4581336cb0ec096da2ed490f4e7f0510ada1a7e952fd4b4834"
Nov 23 15:48:11 crc kubenswrapper[4718]: I1123 15:48:11.249908 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9608cd475478ea4581336cb0ec096da2ed490f4e7f0510ada1a7e952fd4b4834"} err="failed to get container status \"9608cd475478ea4581336cb0ec096da2ed490f4e7f0510ada1a7e952fd4b4834\": rpc error: code = NotFound desc = could not find container \"9608cd475478ea4581336cb0ec096da2ed490f4e7f0510ada1a7e952fd4b4834\": container with ID starting with 9608cd475478ea4581336cb0ec096da2ed490f4e7f0510ada1a7e952fd4b4834 not found: ID does not exist"
Nov 23 15:48:12 crc kubenswrapper[4718]: I1123 15:48:12.452414 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="473edc6e-9b5b-4f07-848d-67153e247ccf" path="/var/lib/kubelet/pods/473edc6e-9b5b-4f07-848d-67153e247ccf/volumes"
Nov 23 15:48:14 crc kubenswrapper[4718]: I1123 15:48:14.441342 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757"
Nov 23 15:48:14 crc kubenswrapper[4718]: E1123 15:48:14.442052 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:48:28 crc kubenswrapper[4718]: I1123 15:48:28.441113 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757"
Nov 23 15:48:28 crc kubenswrapper[4718]: E1123 15:48:28.441994 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:48:43 crc kubenswrapper[4718]: I1123 15:48:43.442091 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757"
Nov 23 15:48:43 crc kubenswrapper[4718]: E1123 15:48:43.442949 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:48:55 crc kubenswrapper[4718]: I1123 15:48:55.440897 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757"
Nov 23 15:48:55 crc kubenswrapper[4718]: E1123 15:48:55.441520 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:49:10 crc kubenswrapper[4718]: I1123 15:49:10.448772 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757"
Nov 23 15:49:10 crc kubenswrapper[4718]: E1123 15:49:10.449619 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:49:24 crc kubenswrapper[4718]: I1123 15:49:24.441847 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757"
Nov 23 15:49:24 crc kubenswrapper[4718]: E1123 15:49:24.442684 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:49:39 crc kubenswrapper[4718]: I1123 15:49:39.441650 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757"
Nov 23 15:49:39 crc kubenswrapper[4718]: E1123 15:49:39.442404 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:49:53 crc kubenswrapper[4718]: I1123 15:49:53.443704 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757"
Nov 23 15:49:53 crc kubenswrapper[4718]: E1123 15:49:53.444837 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:50:01 crc kubenswrapper[4718]: I1123 15:50:01.790484 4718 scope.go:117] "RemoveContainer" containerID="0b59a02e4ab689f4f2dc2bcc519dbf5d477e77f556946d0d705fba2e4ced2eeb"
Nov 23 15:50:08 crc kubenswrapper[4718]: I1123 15:50:08.441954 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757"
Nov 23 15:50:08 crc kubenswrapper[4718]: E1123 15:50:08.443259 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:50:19 crc kubenswrapper[4718]: I1123 15:50:19.441100 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757"
Nov 23 15:50:19 crc kubenswrapper[4718]: E1123 15:50:19.441911 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"
Nov 23 15:50:31 crc kubenswrapper[4718]: I1123 15:50:31.441791 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757"
Nov 23 15:50:32 crc kubenswrapper[4718]: I1123 15:50:32.567036 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerStarted","Data":"8494e2961df6cbfe52cd4396ebd60e3b134bb680314d399effe7c36a101ad7dc"}
Nov 23 15:50:45 crc kubenswrapper[4718]: I1123 15:50:45.898931 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x7qj2/must-gather-h69sp"]
Nov 23 15:50:45 crc kubenswrapper[4718]: E1123 15:50:45.899877 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="473edc6e-9b5b-4f07-848d-67153e247ccf" containerName="gather"
Nov 23 15:50:45 crc kubenswrapper[4718]: I1123 15:50:45.899890 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="473edc6e-9b5b-4f07-848d-67153e247ccf" containerName="gather"
Nov 23 15:50:45 crc kubenswrapper[4718]: E1123 15:50:45.899914 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="473edc6e-9b5b-4f07-848d-67153e247ccf" containerName="copy"
Nov 23 15:50:45 crc kubenswrapper[4718]: I1123 15:50:45.899920 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="473edc6e-9b5b-4f07-848d-67153e247ccf" containerName="copy"
Nov 23 15:50:45 crc kubenswrapper[4718]: E1123 15:50:45.899941 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b76158-28c3-4d2a-a4ba-cd8baee34e3d" containerName="collect-profiles"
Nov 23 15:50:45 crc kubenswrapper[4718]: I1123 15:50:45.899947 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b76158-28c3-4d2a-a4ba-cd8baee34e3d" containerName="collect-profiles"
Nov 23 15:50:45 crc kubenswrapper[4718]: I1123 15:50:45.900158 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="473edc6e-9b5b-4f07-848d-67153e247ccf" containerName="copy"
Nov 23 15:50:45 crc kubenswrapper[4718]: I1123 15:50:45.900176 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="473edc6e-9b5b-4f07-848d-67153e247ccf" containerName="gather"
Nov 23 15:50:45 crc kubenswrapper[4718]: I1123 15:50:45.900188 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="37b76158-28c3-4d2a-a4ba-cd8baee34e3d" containerName="collect-profiles"
Nov 23 15:50:45 crc kubenswrapper[4718]: I1123 15:50:45.901264 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7qj2/must-gather-h69sp"
Nov 23 15:50:45 crc kubenswrapper[4718]: I1123 15:50:45.909537 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-x7qj2"/"openshift-service-ca.crt"
Nov 23 15:50:45 crc kubenswrapper[4718]: I1123 15:50:45.910003 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-x7qj2"/"kube-root-ca.crt"
Nov 23 15:50:45 crc kubenswrapper[4718]: I1123 15:50:45.935125 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-x7qj2/must-gather-h69sp"]
Nov 23 15:50:45 crc kubenswrapper[4718]: I1123 15:50:45.972738 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e17db7d9-eb38-46ab-ba44-befa4ad50685-must-gather-output\") pod \"must-gather-h69sp\" (UID: \"e17db7d9-eb38-46ab-ba44-befa4ad50685\") " pod="openshift-must-gather-x7qj2/must-gather-h69sp"
Nov 23 15:50:45 crc kubenswrapper[4718]: I1123 15:50:45.972962 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp9jg\" (UniqueName: \"kubernetes.io/projected/e17db7d9-eb38-46ab-ba44-befa4ad50685-kube-api-access-hp9jg\") pod \"must-gather-h69sp\" (UID: \"e17db7d9-eb38-46ab-ba44-befa4ad50685\") " pod="openshift-must-gather-x7qj2/must-gather-h69sp"
Nov 23 15:50:46 crc kubenswrapper[4718]: I1123 15:50:46.074886 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e17db7d9-eb38-46ab-ba44-befa4ad50685-must-gather-output\") pod \"must-gather-h69sp\" (UID: \"e17db7d9-eb38-46ab-ba44-befa4ad50685\") " pod="openshift-must-gather-x7qj2/must-gather-h69sp"
Nov 23 15:50:46 crc kubenswrapper[4718]: I1123 15:50:46.075115 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp9jg\" (UniqueName: \"kubernetes.io/projected/e17db7d9-eb38-46ab-ba44-befa4ad50685-kube-api-access-hp9jg\") pod \"must-gather-h69sp\" (UID: \"e17db7d9-eb38-46ab-ba44-befa4ad50685\") " pod="openshift-must-gather-x7qj2/must-gather-h69sp"
Nov 23 15:50:46 crc kubenswrapper[4718]: I1123 15:50:46.075504 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e17db7d9-eb38-46ab-ba44-befa4ad50685-must-gather-output\") pod \"must-gather-h69sp\" (UID: \"e17db7d9-eb38-46ab-ba44-befa4ad50685\") " pod="openshift-must-gather-x7qj2/must-gather-h69sp"
Nov 23 15:50:46 crc kubenswrapper[4718]: I1123 15:50:46.096285 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp9jg\" (UniqueName: \"kubernetes.io/projected/e17db7d9-eb38-46ab-ba44-befa4ad50685-kube-api-access-hp9jg\") pod \"must-gather-h69sp\" (UID: \"e17db7d9-eb38-46ab-ba44-befa4ad50685\") " pod="openshift-must-gather-x7qj2/must-gather-h69sp"
Nov 23 15:50:46 crc kubenswrapper[4718]: I1123 15:50:46.224946 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7qj2/must-gather-h69sp"
Nov 23 15:50:46 crc kubenswrapper[4718]: I1123 15:50:46.675032 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-x7qj2/must-gather-h69sp"]
Nov 23 15:50:46 crc kubenswrapper[4718]: I1123 15:50:46.719416 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7qj2/must-gather-h69sp" event={"ID":"e17db7d9-eb38-46ab-ba44-befa4ad50685","Type":"ContainerStarted","Data":"79ad31fd3053d12ba66232bed27d252fcbf37a5cd11713e0a6c5af5a3d281efc"}
Nov 23 15:50:47 crc kubenswrapper[4718]: I1123 15:50:47.738274 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7qj2/must-gather-h69sp" event={"ID":"e17db7d9-eb38-46ab-ba44-befa4ad50685","Type":"ContainerStarted","Data":"23fc6affa856442b0cf2395664540bf8a34da3b4828382182157da478f2154ba"}
Nov 23 15:50:47 crc kubenswrapper[4718]: I1123 15:50:47.738726 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7qj2/must-gather-h69sp" event={"ID":"e17db7d9-eb38-46ab-ba44-befa4ad50685","Type":"ContainerStarted","Data":"12a12cca0af1ceb49d1833584ca52b2f5b47bb3192078e68d317629f26d6a68c"}
Nov 23 15:50:47 crc kubenswrapper[4718]: I1123 15:50:47.763559 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-x7qj2/must-gather-h69sp" podStartSLOduration=2.763540921 podStartE2EDuration="2.763540921s" podCreationTimestamp="2025-11-23 15:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:50:47.755830594 +0000 UTC m=+3898.995450438" watchObservedRunningTime="2025-11-23 15:50:47.763540921 +0000 UTC m=+3899.003160765"
Nov 23 15:50:50 crc kubenswrapper[4718]: I1123 15:50:50.786816 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x7qj2/crc-debug-86bjq"]
Nov 23 15:50:50 crc kubenswrapper[4718]: I1123 15:50:50.788421 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7qj2/crc-debug-86bjq"
Nov 23 15:50:50 crc kubenswrapper[4718]: I1123 15:50:50.790856 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-x7qj2"/"default-dockercfg-hfxxh"
Nov 23 15:50:50 crc kubenswrapper[4718]: I1123 15:50:50.808780 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zldsk\" (UniqueName: \"kubernetes.io/projected/2a09c25d-c67c-4655-8c4d-742f016afcc6-kube-api-access-zldsk\") pod \"crc-debug-86bjq\" (UID: \"2a09c25d-c67c-4655-8c4d-742f016afcc6\") " pod="openshift-must-gather-x7qj2/crc-debug-86bjq"
Nov 23 15:50:50 crc kubenswrapper[4718]: I1123 15:50:50.808976 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2a09c25d-c67c-4655-8c4d-742f016afcc6-host\") pod \"crc-debug-86bjq\" (UID: \"2a09c25d-c67c-4655-8c4d-742f016afcc6\") " pod="openshift-must-gather-x7qj2/crc-debug-86bjq"
Nov 23 15:50:50 crc kubenswrapper[4718]: I1123 15:50:50.910879 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zldsk\" (UniqueName: \"kubernetes.io/projected/2a09c25d-c67c-4655-8c4d-742f016afcc6-kube-api-access-zldsk\") pod \"crc-debug-86bjq\" (UID: \"2a09c25d-c67c-4655-8c4d-742f016afcc6\") " pod="openshift-must-gather-x7qj2/crc-debug-86bjq"
Nov 23 15:50:50 crc kubenswrapper[4718]: I1123 15:50:50.911053 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2a09c25d-c67c-4655-8c4d-742f016afcc6-host\") pod \"crc-debug-86bjq\" (UID: \"2a09c25d-c67c-4655-8c4d-742f016afcc6\") " pod="openshift-must-gather-x7qj2/crc-debug-86bjq"
Nov 23 15:50:50 crc kubenswrapper[4718]: I1123 15:50:50.911403 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2a09c25d-c67c-4655-8c4d-742f016afcc6-host\") pod \"crc-debug-86bjq\" (UID: \"2a09c25d-c67c-4655-8c4d-742f016afcc6\") " pod="openshift-must-gather-x7qj2/crc-debug-86bjq"
Nov 23 15:50:50 crc kubenswrapper[4718]: I1123 15:50:50.934524 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zldsk\" (UniqueName: \"kubernetes.io/projected/2a09c25d-c67c-4655-8c4d-742f016afcc6-kube-api-access-zldsk\") pod \"crc-debug-86bjq\" (UID: \"2a09c25d-c67c-4655-8c4d-742f016afcc6\") " pod="openshift-must-gather-x7qj2/crc-debug-86bjq"
Nov 23 15:50:51 crc kubenswrapper[4718]: I1123 15:50:51.113738 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7qj2/crc-debug-86bjq"
Nov 23 15:50:51 crc kubenswrapper[4718]: I1123 15:50:51.779369 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7qj2/crc-debug-86bjq" event={"ID":"2a09c25d-c67c-4655-8c4d-742f016afcc6","Type":"ContainerStarted","Data":"7611833681e4ad2c0309c58d9c06ac9707f9210f891d1e22a5778c60c1c001be"}
Nov 23 15:50:51 crc kubenswrapper[4718]: I1123 15:50:51.779705 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7qj2/crc-debug-86bjq" event={"ID":"2a09c25d-c67c-4655-8c4d-742f016afcc6","Type":"ContainerStarted","Data":"3c77ceb057b0cc69994923a226a72da741aeb7ecd83fd73ebc7763c55e503cfe"}
Nov 23 15:50:51 crc kubenswrapper[4718]: I1123 15:50:51.798222 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-x7qj2/crc-debug-86bjq" podStartSLOduration=1.798202289 podStartE2EDuration="1.798202289s" podCreationTimestamp="2025-11-23 15:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:50:51.791581411 +0000 UTC m=+3903.031201255" watchObservedRunningTime="2025-11-23 15:50:51.798202289 +0000 UTC m=+3903.037822123"
Nov 23 15:51:14 crc kubenswrapper[4718]: I1123 15:51:14.857175 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tpfkc"]
Nov 23 15:51:14 crc kubenswrapper[4718]: I1123 15:51:14.863578 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tpfkc"
Nov 23 15:51:14 crc kubenswrapper[4718]: I1123 15:51:14.872769 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tpfkc"]
Nov 23 15:51:14 crc kubenswrapper[4718]: I1123 15:51:14.939246 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-792lq\" (UniqueName: \"kubernetes.io/projected/48e399c0-f658-48d6-a0ff-9da0bd91ff6f-kube-api-access-792lq\") pod \"certified-operators-tpfkc\" (UID: \"48e399c0-f658-48d6-a0ff-9da0bd91ff6f\") " pod="openshift-marketplace/certified-operators-tpfkc"
Nov 23 15:51:14 crc kubenswrapper[4718]: I1123 15:51:14.939366 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48e399c0-f658-48d6-a0ff-9da0bd91ff6f-catalog-content\") pod \"certified-operators-tpfkc\" (UID: \"48e399c0-f658-48d6-a0ff-9da0bd91ff6f\") " pod="openshift-marketplace/certified-operators-tpfkc"
Nov 23 15:51:14 crc kubenswrapper[4718]: I1123 15:51:14.939412 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48e399c0-f658-48d6-a0ff-9da0bd91ff6f-utilities\") pod \"certified-operators-tpfkc\" (UID: \"48e399c0-f658-48d6-a0ff-9da0bd91ff6f\") " pod="openshift-marketplace/certified-operators-tpfkc"
Nov 23 15:51:15 crc kubenswrapper[4718]: I1123 15:51:15.040788 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48e399c0-f658-48d6-a0ff-9da0bd91ff6f-utilities\") pod \"certified-operators-tpfkc\" (UID: \"48e399c0-f658-48d6-a0ff-9da0bd91ff6f\") " pod="openshift-marketplace/certified-operators-tpfkc"
Nov 23 15:51:15 crc kubenswrapper[4718]: I1123 15:51:15.040913 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-792lq\" (UniqueName: \"kubernetes.io/projected/48e399c0-f658-48d6-a0ff-9da0bd91ff6f-kube-api-access-792lq\") pod \"certified-operators-tpfkc\" (UID: \"48e399c0-f658-48d6-a0ff-9da0bd91ff6f\") " pod="openshift-marketplace/certified-operators-tpfkc"
Nov 23 15:51:15 crc kubenswrapper[4718]: I1123 15:51:15.040991 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48e399c0-f658-48d6-a0ff-9da0bd91ff6f-catalog-content\") pod \"certified-operators-tpfkc\" (UID: \"48e399c0-f658-48d6-a0ff-9da0bd91ff6f\") " pod="openshift-marketplace/certified-operators-tpfkc"
Nov 23 15:51:15 crc kubenswrapper[4718]: I1123 15:51:15.041419 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48e399c0-f658-48d6-a0ff-9da0bd91ff6f-catalog-content\") pod \"certified-operators-tpfkc\" (UID: \"48e399c0-f658-48d6-a0ff-9da0bd91ff6f\") " pod="openshift-marketplace/certified-operators-tpfkc"
Nov 23 15:51:15 crc kubenswrapper[4718]: I1123 15:51:15.041464 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48e399c0-f658-48d6-a0ff-9da0bd91ff6f-utilities\") pod \"certified-operators-tpfkc\" (UID: \"48e399c0-f658-48d6-a0ff-9da0bd91ff6f\") " pod="openshift-marketplace/certified-operators-tpfkc"
Nov 23 15:51:15 crc kubenswrapper[4718]: I1123 15:51:15.059610 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-792lq\" (UniqueName: \"kubernetes.io/projected/48e399c0-f658-48d6-a0ff-9da0bd91ff6f-kube-api-access-792lq\") pod \"certified-operators-tpfkc\" (UID: \"48e399c0-f658-48d6-a0ff-9da0bd91ff6f\") " pod="openshift-marketplace/certified-operators-tpfkc"
Nov 23 15:51:15 crc kubenswrapper[4718]: I1123 15:51:15.186159 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tpfkc"
Nov 23 15:51:15 crc kubenswrapper[4718]: I1123 15:51:15.739731 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tpfkc"]
Nov 23 15:51:15 crc kubenswrapper[4718]: I1123 15:51:15.985943 4718 generic.go:334] "Generic (PLEG): container finished" podID="48e399c0-f658-48d6-a0ff-9da0bd91ff6f" containerID="1e1b8a1dadda174a853d80e91e3bc8ab0c0b7b35aed822d1f54affb2471d135e" exitCode=0
Nov 23 15:51:15 crc kubenswrapper[4718]: I1123 15:51:15.986285 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tpfkc" event={"ID":"48e399c0-f658-48d6-a0ff-9da0bd91ff6f","Type":"ContainerDied","Data":"1e1b8a1dadda174a853d80e91e3bc8ab0c0b7b35aed822d1f54affb2471d135e"}
Nov 23 15:51:15 crc kubenswrapper[4718]: I1123 15:51:15.986329 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tpfkc" event={"ID":"48e399c0-f658-48d6-a0ff-9da0bd91ff6f","Type":"ContainerStarted","Data":"207db931dc0fcf1f48066672375d5b2b19fe1a75c61ef2e1b227d48efb368d2d"}
Nov 23 15:51:15 crc kubenswrapper[4718]: I1123 15:51:15.988742 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 23 15:51:18 crc kubenswrapper[4718]: I1123 15:51:18.005635 4718 generic.go:334] "Generic (PLEG): container finished" podID="48e399c0-f658-48d6-a0ff-9da0bd91ff6f" containerID="040a3db93967729218012a4cf21f92a1636ad878ae437ea4f6d43cf82a92c0ae" exitCode=0
Nov 23 15:51:18 crc kubenswrapper[4718]: I1123 15:51:18.005702 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tpfkc" event={"ID":"48e399c0-f658-48d6-a0ff-9da0bd91ff6f","Type":"ContainerDied","Data":"040a3db93967729218012a4cf21f92a1636ad878ae437ea4f6d43cf82a92c0ae"}
Nov 23 15:51:19 crc kubenswrapper[4718]: I1123 15:51:19.022003 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tpfkc" event={"ID":"48e399c0-f658-48d6-a0ff-9da0bd91ff6f","Type":"ContainerStarted","Data":"1a557c643ecce7f8a14ddc6c9d54f8339017a8d22128c183699d87e07605e977"}
Nov 23 15:51:19 crc kubenswrapper[4718]: I1123 15:51:19.049481 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tpfkc" podStartSLOduration=2.386826196 podStartE2EDuration="5.049462103s" podCreationTimestamp="2025-11-23 15:51:14 +0000 UTC" firstStartedPulling="2025-11-23 15:51:15.98838489 +0000 UTC m=+3927.228004744" lastFinishedPulling="2025-11-23 15:51:18.651020807 +0000 UTC m=+3929.890640651" observedRunningTime="2025-11-23 15:51:19.044846309 +0000 UTC m=+3930.284466153" watchObservedRunningTime="2025-11-23 15:51:19.049462103 +0000 UTC m=+3930.289081957"
Nov 23 15:51:25 crc kubenswrapper[4718]: I1123 15:51:25.186537 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tpfkc"
Nov 23 15:51:25 crc kubenswrapper[4718]: I1123 15:51:25.188245 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tpfkc"
Nov 23 15:51:25 crc kubenswrapper[4718]: I1123 15:51:25.237357 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tpfkc"
Nov 23 15:51:25 crc kubenswrapper[4718]: I1123 15:51:25.448407 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tpfkc"
Nov 23 15:51:25 crc kubenswrapper[4718]: I1123 15:51:25.504462 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tpfkc"]
Nov 23 15:51:27 crc kubenswrapper[4718]: I1123 15:51:27.412232 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tpfkc" podUID="48e399c0-f658-48d6-a0ff-9da0bd91ff6f" containerName="registry-server" containerID="cri-o://1a557c643ecce7f8a14ddc6c9d54f8339017a8d22128c183699d87e07605e977" gracePeriod=2
Nov 23 15:51:27 crc kubenswrapper[4718]: I1123 15:51:27.853235 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tpfkc"
Nov 23 15:51:27 crc kubenswrapper[4718]: I1123 15:51:27.954156 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48e399c0-f658-48d6-a0ff-9da0bd91ff6f-utilities\") pod \"48e399c0-f658-48d6-a0ff-9da0bd91ff6f\" (UID: \"48e399c0-f658-48d6-a0ff-9da0bd91ff6f\") "
Nov 23 15:51:27 crc kubenswrapper[4718]: I1123 15:51:27.954196 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48e399c0-f658-48d6-a0ff-9da0bd91ff6f-catalog-content\") pod \"48e399c0-f658-48d6-a0ff-9da0bd91ff6f\" (UID: \"48e399c0-f658-48d6-a0ff-9da0bd91ff6f\") "
Nov 23 15:51:27 crc kubenswrapper[4718]: I1123 15:51:27.954226 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-792lq\" (UniqueName: \"kubernetes.io/projected/48e399c0-f658-48d6-a0ff-9da0bd91ff6f-kube-api-access-792lq\") pod \"48e399c0-f658-48d6-a0ff-9da0bd91ff6f\" (UID: \"48e399c0-f658-48d6-a0ff-9da0bd91ff6f\") "
Nov 23 15:51:27 crc kubenswrapper[4718]: I1123 15:51:27.955390 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48e399c0-f658-48d6-a0ff-9da0bd91ff6f-utilities" (OuterVolumeSpecName: "utilities") pod "48e399c0-f658-48d6-a0ff-9da0bd91ff6f" (UID: "48e399c0-f658-48d6-a0ff-9da0bd91ff6f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 15:51:27 crc kubenswrapper[4718]: I1123 15:51:27.959851 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48e399c0-f658-48d6-a0ff-9da0bd91ff6f-kube-api-access-792lq" (OuterVolumeSpecName: "kube-api-access-792lq") pod "48e399c0-f658-48d6-a0ff-9da0bd91ff6f" (UID: "48e399c0-f658-48d6-a0ff-9da0bd91ff6f"). InnerVolumeSpecName "kube-api-access-792lq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:51:28 crc kubenswrapper[4718]: I1123 15:51:28.056564 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48e399c0-f658-48d6-a0ff-9da0bd91ff6f-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 15:51:28 crc kubenswrapper[4718]: I1123 15:51:28.056602 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-792lq\" (UniqueName: \"kubernetes.io/projected/48e399c0-f658-48d6-a0ff-9da0bd91ff6f-kube-api-access-792lq\") on node \"crc\" DevicePath \"\""
Nov 23 15:51:28 crc kubenswrapper[4718]: I1123 15:51:28.445899 4718 generic.go:334] "Generic (PLEG): container finished" podID="48e399c0-f658-48d6-a0ff-9da0bd91ff6f" containerID="1a557c643ecce7f8a14ddc6c9d54f8339017a8d22128c183699d87e07605e977" exitCode=0
Nov 23 15:51:28 crc kubenswrapper[4718]: I1123 15:51:28.446045 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tpfkc"
Nov 23 15:51:28 crc kubenswrapper[4718]: I1123 15:51:28.450887 4718 generic.go:334] "Generic (PLEG): container finished" podID="2a09c25d-c67c-4655-8c4d-742f016afcc6" containerID="7611833681e4ad2c0309c58d9c06ac9707f9210f891d1e22a5778c60c1c001be" exitCode=0
Nov 23 15:51:28 crc kubenswrapper[4718]: I1123 15:51:28.461908 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tpfkc" event={"ID":"48e399c0-f658-48d6-a0ff-9da0bd91ff6f","Type":"ContainerDied","Data":"1a557c643ecce7f8a14ddc6c9d54f8339017a8d22128c183699d87e07605e977"}
Nov 23 15:51:28 crc kubenswrapper[4718]: I1123 15:51:28.461997 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tpfkc" event={"ID":"48e399c0-f658-48d6-a0ff-9da0bd91ff6f","Type":"ContainerDied","Data":"207db931dc0fcf1f48066672375d5b2b19fe1a75c61ef2e1b227d48efb368d2d"}
Nov 23 15:51:28 crc kubenswrapper[4718]: I1123 15:51:28.462012 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7qj2/crc-debug-86bjq" event={"ID":"2a09c25d-c67c-4655-8c4d-742f016afcc6","Type":"ContainerDied","Data":"7611833681e4ad2c0309c58d9c06ac9707f9210f891d1e22a5778c60c1c001be"}
Nov 23 15:51:28 crc kubenswrapper[4718]: I1123 15:51:28.462054 4718 scope.go:117] "RemoveContainer" containerID="1a557c643ecce7f8a14ddc6c9d54f8339017a8d22128c183699d87e07605e977"
Nov 23 15:51:28 crc kubenswrapper[4718]: I1123 15:51:28.486608 4718 scope.go:117] "RemoveContainer" containerID="040a3db93967729218012a4cf21f92a1636ad878ae437ea4f6d43cf82a92c0ae"
Nov 23 15:51:28 crc kubenswrapper[4718]: I1123 15:51:28.510828 4718 scope.go:117] "RemoveContainer" containerID="1e1b8a1dadda174a853d80e91e3bc8ab0c0b7b35aed822d1f54affb2471d135e"
Nov 23 15:51:28 crc kubenswrapper[4718]: I1123 15:51:28.558593 4718 scope.go:117] "RemoveContainer" containerID="1a557c643ecce7f8a14ddc6c9d54f8339017a8d22128c183699d87e07605e977"
Nov 23 15:51:28 crc kubenswrapper[4718]: E1123 15:51:28.559832 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a557c643ecce7f8a14ddc6c9d54f8339017a8d22128c183699d87e07605e977\": container with ID starting with 1a557c643ecce7f8a14ddc6c9d54f8339017a8d22128c183699d87e07605e977 not found: ID does not exist" containerID="1a557c643ecce7f8a14ddc6c9d54f8339017a8d22128c183699d87e07605e977"
Nov 23 15:51:28 crc kubenswrapper[4718]: I1123 15:51:28.559871 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a557c643ecce7f8a14ddc6c9d54f8339017a8d22128c183699d87e07605e977"} err="failed to get container status \"1a557c643ecce7f8a14ddc6c9d54f8339017a8d22128c183699d87e07605e977\": rpc error: code = NotFound desc = could not find container \"1a557c643ecce7f8a14ddc6c9d54f8339017a8d22128c183699d87e07605e977\": container with ID starting with 1a557c643ecce7f8a14ddc6c9d54f8339017a8d22128c183699d87e07605e977 not found: ID does not exist"
Nov 23 15:51:28 crc kubenswrapper[4718]: I1123 15:51:28.559892 4718 scope.go:117] "RemoveContainer" containerID="040a3db93967729218012a4cf21f92a1636ad878ae437ea4f6d43cf82a92c0ae"
Nov 23 15:51:28 crc kubenswrapper[4718]: E1123 15:51:28.560257 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"040a3db93967729218012a4cf21f92a1636ad878ae437ea4f6d43cf82a92c0ae\": container with ID starting with 040a3db93967729218012a4cf21f92a1636ad878ae437ea4f6d43cf82a92c0ae not found: ID does not exist" containerID="040a3db93967729218012a4cf21f92a1636ad878ae437ea4f6d43cf82a92c0ae"
Nov 23 15:51:28 crc kubenswrapper[4718]: I1123 15:51:28.560295 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"040a3db93967729218012a4cf21f92a1636ad878ae437ea4f6d43cf82a92c0ae"} err="failed to get container status \"040a3db93967729218012a4cf21f92a1636ad878ae437ea4f6d43cf82a92c0ae\": rpc error: code = NotFound desc = could not find container \"040a3db93967729218012a4cf21f92a1636ad878ae437ea4f6d43cf82a92c0ae\": container with ID starting with 040a3db93967729218012a4cf21f92a1636ad878ae437ea4f6d43cf82a92c0ae not found: ID does not exist"
Nov 23 15:51:28 crc kubenswrapper[4718]: I1123 15:51:28.560322 4718 scope.go:117] "RemoveContainer" containerID="1e1b8a1dadda174a853d80e91e3bc8ab0c0b7b35aed822d1f54affb2471d135e"
Nov 23 15:51:28 crc kubenswrapper[4718]: E1123 15:51:28.560625 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e1b8a1dadda174a853d80e91e3bc8ab0c0b7b35aed822d1f54affb2471d135e\": container with ID starting with 1e1b8a1dadda174a853d80e91e3bc8ab0c0b7b35aed822d1f54affb2471d135e not found: ID does not exist" containerID="1e1b8a1dadda174a853d80e91e3bc8ab0c0b7b35aed822d1f54affb2471d135e"
Nov 23 15:51:28 crc kubenswrapper[4718]: I1123 15:51:28.560659 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e1b8a1dadda174a853d80e91e3bc8ab0c0b7b35aed822d1f54affb2471d135e"} err="failed to get container status \"1e1b8a1dadda174a853d80e91e3bc8ab0c0b7b35aed822d1f54affb2471d135e\": rpc error: code = NotFound desc = could not find container \"1e1b8a1dadda174a853d80e91e3bc8ab0c0b7b35aed822d1f54affb2471d135e\": container with ID starting with 1e1b8a1dadda174a853d80e91e3bc8ab0c0b7b35aed822d1f54affb2471d135e not found: ID does not exist"
Nov 23 15:51:28 crc kubenswrapper[4718]: I1123 15:51:28.986308 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48e399c0-f658-48d6-a0ff-9da0bd91ff6f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48e399c0-f658-48d6-a0ff-9da0bd91ff6f" (UID: "48e399c0-f658-48d6-a0ff-9da0bd91ff6f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 15:51:29 crc kubenswrapper[4718]: I1123 15:51:29.077264 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48e399c0-f658-48d6-a0ff-9da0bd91ff6f-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 15:51:29 crc kubenswrapper[4718]: I1123 15:51:29.084317 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tpfkc"]
Nov 23 15:51:29 crc kubenswrapper[4718]: I1123 15:51:29.095013 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tpfkc"]
Nov 23 15:51:29 crc kubenswrapper[4718]: I1123 15:51:29.571289 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7qj2/crc-debug-86bjq"
Nov 23 15:51:29 crc kubenswrapper[4718]: I1123 15:51:29.613913 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x7qj2/crc-debug-86bjq"]
Nov 23 15:51:29 crc kubenswrapper[4718]: I1123 15:51:29.625779 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x7qj2/crc-debug-86bjq"]
Nov 23 15:51:29 crc kubenswrapper[4718]: I1123 15:51:29.688359 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2a09c25d-c67c-4655-8c4d-742f016afcc6-host\") pod \"2a09c25d-c67c-4655-8c4d-742f016afcc6\" (UID: \"2a09c25d-c67c-4655-8c4d-742f016afcc6\") "
Nov 23 15:51:29 crc kubenswrapper[4718]: I1123 15:51:29.688408 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zldsk\" (UniqueName: \"kubernetes.io/projected/2a09c25d-c67c-4655-8c4d-742f016afcc6-kube-api-access-zldsk\") pod \"2a09c25d-c67c-4655-8c4d-742f016afcc6\" (UID: \"2a09c25d-c67c-4655-8c4d-742f016afcc6\") "
Nov 23 15:51:29 crc kubenswrapper[4718]: I1123 15:51:29.688491 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a09c25d-c67c-4655-8c4d-742f016afcc6-host" (OuterVolumeSpecName: "host") pod "2a09c25d-c67c-4655-8c4d-742f016afcc6" (UID: "2a09c25d-c67c-4655-8c4d-742f016afcc6"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 23 15:51:29 crc kubenswrapper[4718]: I1123 15:51:29.688932 4718 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2a09c25d-c67c-4655-8c4d-742f016afcc6-host\") on node \"crc\" DevicePath \"\""
Nov 23 15:51:29 crc kubenswrapper[4718]: I1123 15:51:29.700718 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a09c25d-c67c-4655-8c4d-742f016afcc6-kube-api-access-zldsk" (OuterVolumeSpecName: "kube-api-access-zldsk") pod "2a09c25d-c67c-4655-8c4d-742f016afcc6" (UID: "2a09c25d-c67c-4655-8c4d-742f016afcc6"). InnerVolumeSpecName "kube-api-access-zldsk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:51:29 crc kubenswrapper[4718]: I1123 15:51:29.791334 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zldsk\" (UniqueName: \"kubernetes.io/projected/2a09c25d-c67c-4655-8c4d-742f016afcc6-kube-api-access-zldsk\") on node \"crc\" DevicePath \"\""
Nov 23 15:51:30 crc kubenswrapper[4718]: I1123 15:51:30.451640 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a09c25d-c67c-4655-8c4d-742f016afcc6" path="/var/lib/kubelet/pods/2a09c25d-c67c-4655-8c4d-742f016afcc6/volumes"
Nov 23 15:51:30 crc kubenswrapper[4718]: I1123 15:51:30.452259 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48e399c0-f658-48d6-a0ff-9da0bd91ff6f" path="/var/lib/kubelet/pods/48e399c0-f658-48d6-a0ff-9da0bd91ff6f/volumes"
Nov 23 15:51:30 crc kubenswrapper[4718]: I1123 15:51:30.473948 4718 scope.go:117] "RemoveContainer" containerID="7611833681e4ad2c0309c58d9c06ac9707f9210f891d1e22a5778c60c1c001be"
Nov 23 15:51:30 crc kubenswrapper[4718]: I1123 15:51:30.474016 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7qj2/crc-debug-86bjq"
Nov 23 15:51:30 crc kubenswrapper[4718]: I1123 15:51:30.795833 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x7qj2/crc-debug-wgh4f"]
Nov 23 15:51:30 crc kubenswrapper[4718]: E1123 15:51:30.796238 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48e399c0-f658-48d6-a0ff-9da0bd91ff6f" containerName="extract-content"
Nov 23 15:51:30 crc kubenswrapper[4718]: I1123 15:51:30.796251 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="48e399c0-f658-48d6-a0ff-9da0bd91ff6f" containerName="extract-content"
Nov 23 15:51:30 crc kubenswrapper[4718]: E1123 15:51:30.796262 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48e399c0-f658-48d6-a0ff-9da0bd91ff6f" containerName="registry-server"
Nov 23 15:51:30 crc kubenswrapper[4718]: I1123 15:51:30.796268 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="48e399c0-f658-48d6-a0ff-9da0bd91ff6f" containerName="registry-server"
Nov 23 15:51:30 crc kubenswrapper[4718]: E1123 15:51:30.796289 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48e399c0-f658-48d6-a0ff-9da0bd91ff6f" containerName="extract-utilities"
Nov 23 15:51:30 crc kubenswrapper[4718]: I1123 15:51:30.796296 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="48e399c0-f658-48d6-a0ff-9da0bd91ff6f" containerName="extract-utilities"
Nov 23 15:51:30 crc kubenswrapper[4718]: E1123 15:51:30.796316 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a09c25d-c67c-4655-8c4d-742f016afcc6" containerName="container-00"
Nov 23 15:51:30 crc kubenswrapper[4718]: I1123 15:51:30.796322 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a09c25d-c67c-4655-8c4d-742f016afcc6" containerName="container-00"
Nov 23 15:51:30 crc kubenswrapper[4718]: I1123 15:51:30.796511 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a09c25d-c67c-4655-8c4d-742f016afcc6" containerName="container-00"
Nov 23 15:51:30 crc kubenswrapper[4718]: I1123 15:51:30.796525 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="48e399c0-f658-48d6-a0ff-9da0bd91ff6f" containerName="registry-server"
Nov 23 15:51:30 crc kubenswrapper[4718]: I1123 15:51:30.797240 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7qj2/crc-debug-wgh4f"
Nov 23 15:51:30 crc kubenswrapper[4718]: I1123 15:51:30.799009 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-x7qj2"/"default-dockercfg-hfxxh"
Nov 23 15:51:30 crc kubenswrapper[4718]: I1123 15:51:30.912528 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/91d653b5-070f-4c8c-b446-9849379e53c0-host\") pod \"crc-debug-wgh4f\" (UID: \"91d653b5-070f-4c8c-b446-9849379e53c0\") " pod="openshift-must-gather-x7qj2/crc-debug-wgh4f"
Nov 23 15:51:30 crc kubenswrapper[4718]: I1123 15:51:30.912651 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slb8l\" (UniqueName: \"kubernetes.io/projected/91d653b5-070f-4c8c-b446-9849379e53c0-kube-api-access-slb8l\") pod \"crc-debug-wgh4f\" (UID: \"91d653b5-070f-4c8c-b446-9849379e53c0\") " pod="openshift-must-gather-x7qj2/crc-debug-wgh4f"
Nov 23 15:51:31 crc kubenswrapper[4718]: I1123 15:51:31.014815 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slb8l\" (UniqueName: \"kubernetes.io/projected/91d653b5-070f-4c8c-b446-9849379e53c0-kube-api-access-slb8l\") pod \"crc-debug-wgh4f\" (UID: \"91d653b5-070f-4c8c-b446-9849379e53c0\") " pod="openshift-must-gather-x7qj2/crc-debug-wgh4f"
Nov 23 15:51:31 crc kubenswrapper[4718]: I1123 15:51:31.014969 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/91d653b5-070f-4c8c-b446-9849379e53c0-host\") pod \"crc-debug-wgh4f\" (UID: \"91d653b5-070f-4c8c-b446-9849379e53c0\") " pod="openshift-must-gather-x7qj2/crc-debug-wgh4f"
Nov 23 15:51:31 crc kubenswrapper[4718]: I1123 15:51:31.015078 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/91d653b5-070f-4c8c-b446-9849379e53c0-host\") pod \"crc-debug-wgh4f\" (UID: \"91d653b5-070f-4c8c-b446-9849379e53c0\") " pod="openshift-must-gather-x7qj2/crc-debug-wgh4f"
Nov 23 15:51:31 crc kubenswrapper[4718]: I1123 15:51:31.042589 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slb8l\" (UniqueName: \"kubernetes.io/projected/91d653b5-070f-4c8c-b446-9849379e53c0-kube-api-access-slb8l\") pod \"crc-debug-wgh4f\" (UID: \"91d653b5-070f-4c8c-b446-9849379e53c0\") " pod="openshift-must-gather-x7qj2/crc-debug-wgh4f"
Nov 23 15:51:31 crc kubenswrapper[4718]: I1123 15:51:31.117400 4718 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-must-gather-x7qj2/crc-debug-wgh4f" Nov 23 15:51:31 crc kubenswrapper[4718]: I1123 15:51:31.484035 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7qj2/crc-debug-wgh4f" event={"ID":"91d653b5-070f-4c8c-b446-9849379e53c0","Type":"ContainerStarted","Data":"a0c67f7916444068d685bd0fb758fdabf93f103a3b38513d37d1e09aaa8c2806"} Nov 23 15:51:31 crc kubenswrapper[4718]: I1123 15:51:31.484358 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7qj2/crc-debug-wgh4f" event={"ID":"91d653b5-070f-4c8c-b446-9849379e53c0","Type":"ContainerStarted","Data":"09d127aa4026dd6b466221926bc0aade08f816ec0e8f1ca725eb6cdb86c7cf5a"} Nov 23 15:51:31 crc kubenswrapper[4718]: I1123 15:51:31.501036 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-x7qj2/crc-debug-wgh4f" podStartSLOduration=1.501016916 podStartE2EDuration="1.501016916s" podCreationTimestamp="2025-11-23 15:51:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 15:51:31.49854948 +0000 UTC m=+3942.738169324" watchObservedRunningTime="2025-11-23 15:51:31.501016916 +0000 UTC m=+3942.740636760" Nov 23 15:51:32 crc kubenswrapper[4718]: I1123 15:51:32.504957 4718 generic.go:334] "Generic (PLEG): container finished" podID="91d653b5-070f-4c8c-b446-9849379e53c0" containerID="a0c67f7916444068d685bd0fb758fdabf93f103a3b38513d37d1e09aaa8c2806" exitCode=0 Nov 23 15:51:32 crc kubenswrapper[4718]: I1123 15:51:32.505025 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7qj2/crc-debug-wgh4f" event={"ID":"91d653b5-070f-4c8c-b446-9849379e53c0","Type":"ContainerDied","Data":"a0c67f7916444068d685bd0fb758fdabf93f103a3b38513d37d1e09aaa8c2806"} Nov 23 15:51:33 crc kubenswrapper[4718]: I1123 15:51:33.616802 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7qj2/crc-debug-wgh4f" Nov 23 15:51:33 crc kubenswrapper[4718]: I1123 15:51:33.646288 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x7qj2/crc-debug-wgh4f"] Nov 23 15:51:33 crc kubenswrapper[4718]: I1123 15:51:33.653349 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x7qj2/crc-debug-wgh4f"] Nov 23 15:51:33 crc kubenswrapper[4718]: I1123 15:51:33.763299 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slb8l\" (UniqueName: \"kubernetes.io/projected/91d653b5-070f-4c8c-b446-9849379e53c0-kube-api-access-slb8l\") pod \"91d653b5-070f-4c8c-b446-9849379e53c0\" (UID: \"91d653b5-070f-4c8c-b446-9849379e53c0\") " Nov 23 15:51:33 crc kubenswrapper[4718]: I1123 15:51:33.763394 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/91d653b5-070f-4c8c-b446-9849379e53c0-host\") pod \"91d653b5-070f-4c8c-b446-9849379e53c0\" (UID: \"91d653b5-070f-4c8c-b446-9849379e53c0\") " Nov 23 15:51:33 crc kubenswrapper[4718]: I1123 15:51:33.763848 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91d653b5-070f-4c8c-b446-9849379e53c0-host" (OuterVolumeSpecName: "host") pod "91d653b5-070f-4c8c-b446-9849379e53c0" (UID: "91d653b5-070f-4c8c-b446-9849379e53c0"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 15:51:33 crc kubenswrapper[4718]: I1123 15:51:33.768593 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91d653b5-070f-4c8c-b446-9849379e53c0-kube-api-access-slb8l" (OuterVolumeSpecName: "kube-api-access-slb8l") pod "91d653b5-070f-4c8c-b446-9849379e53c0" (UID: "91d653b5-070f-4c8c-b446-9849379e53c0"). InnerVolumeSpecName "kube-api-access-slb8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:51:33 crc kubenswrapper[4718]: I1123 15:51:33.865371 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slb8l\" (UniqueName: \"kubernetes.io/projected/91d653b5-070f-4c8c-b446-9849379e53c0-kube-api-access-slb8l\") on node \"crc\" DevicePath \"\"" Nov 23 15:51:33 crc kubenswrapper[4718]: I1123 15:51:33.865413 4718 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/91d653b5-070f-4c8c-b446-9849379e53c0-host\") on node \"crc\" DevicePath \"\"" Nov 23 15:51:34 crc kubenswrapper[4718]: I1123 15:51:34.453501 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91d653b5-070f-4c8c-b446-9849379e53c0" path="/var/lib/kubelet/pods/91d653b5-070f-4c8c-b446-9849379e53c0/volumes" Nov 23 15:51:34 crc kubenswrapper[4718]: I1123 15:51:34.521633 4718 scope.go:117] "RemoveContainer" containerID="a0c67f7916444068d685bd0fb758fdabf93f103a3b38513d37d1e09aaa8c2806" Nov 23 15:51:34 crc kubenswrapper[4718]: I1123 15:51:34.521747 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7qj2/crc-debug-wgh4f" Nov 23 15:51:34 crc kubenswrapper[4718]: I1123 15:51:34.822610 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x7qj2/crc-debug-66swq"] Nov 23 15:51:34 crc kubenswrapper[4718]: E1123 15:51:34.823256 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d653b5-070f-4c8c-b446-9849379e53c0" containerName="container-00" Nov 23 15:51:34 crc kubenswrapper[4718]: I1123 15:51:34.823269 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d653b5-070f-4c8c-b446-9849379e53c0" containerName="container-00" Nov 23 15:51:34 crc kubenswrapper[4718]: I1123 15:51:34.823499 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="91d653b5-070f-4c8c-b446-9849379e53c0" containerName="container-00" Nov 23 15:51:34 crc kubenswrapper[4718]: I1123 15:51:34.824136 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x7qj2/crc-debug-66swq" Nov 23 15:51:34 crc kubenswrapper[4718]: I1123 15:51:34.830870 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-x7qj2"/"default-dockercfg-hfxxh" Nov 23 15:51:34 crc kubenswrapper[4718]: I1123 15:51:34.983747 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e733e024-9d77-4ec6-ba06-96cf418c5146-host\") pod \"crc-debug-66swq\" (UID: \"e733e024-9d77-4ec6-ba06-96cf418c5146\") " pod="openshift-must-gather-x7qj2/crc-debug-66swq" Nov 23 15:51:34 crc kubenswrapper[4718]: I1123 15:51:34.983846 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98jm7\" (UniqueName: \"kubernetes.io/projected/e733e024-9d77-4ec6-ba06-96cf418c5146-kube-api-access-98jm7\") pod \"crc-debug-66swq\" (UID: \"e733e024-9d77-4ec6-ba06-96cf418c5146\") " pod="openshift-must-gather-x7qj2/crc-debug-66swq" Nov 23 15:51:35 crc kubenswrapper[4718]: I1123 15:51:35.085737 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e733e024-9d77-4ec6-ba06-96cf418c5146-host\") pod \"crc-debug-66swq\" (UID: \"e733e024-9d77-4ec6-ba06-96cf418c5146\") " pod="openshift-must-gather-x7qj2/crc-debug-66swq" Nov 23 15:51:35 crc kubenswrapper[4718]: I1123 15:51:35.085825 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98jm7\" (UniqueName: \"kubernetes.io/projected/e733e024-9d77-4ec6-ba06-96cf418c5146-kube-api-access-98jm7\") pod \"crc-debug-66swq\" (UID: \"e733e024-9d77-4ec6-ba06-96cf418c5146\") " pod="openshift-must-gather-x7qj2/crc-debug-66swq" Nov 23 15:51:35 crc kubenswrapper[4718]: I1123 15:51:35.085903 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e733e024-9d77-4ec6-ba06-96cf418c5146-host\") pod \"crc-debug-66swq\" (UID: \"e733e024-9d77-4ec6-ba06-96cf418c5146\") " pod="openshift-must-gather-x7qj2/crc-debug-66swq" Nov 23 15:51:35 crc kubenswrapper[4718]: I1123 15:51:35.104057 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98jm7\" (UniqueName: \"kubernetes.io/projected/e733e024-9d77-4ec6-ba06-96cf418c5146-kube-api-access-98jm7\") pod \"crc-debug-66swq\" (UID: \"e733e024-9d77-4ec6-ba06-96cf418c5146\") " pod="openshift-must-gather-x7qj2/crc-debug-66swq" Nov 23 15:51:35 crc kubenswrapper[4718]: I1123 15:51:35.142872 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x7qj2/crc-debug-66swq" Nov 23 15:51:35 crc kubenswrapper[4718]: W1123 15:51:35.168118 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode733e024_9d77_4ec6_ba06_96cf418c5146.slice/crio-6e8f987daecce6dc3eb230c4da0ceb8815be1e9275f5470704a988a7e9b170fb WatchSource:0}: Error finding container 6e8f987daecce6dc3eb230c4da0ceb8815be1e9275f5470704a988a7e9b170fb: Status 404 returned error can't find the container with id 6e8f987daecce6dc3eb230c4da0ceb8815be1e9275f5470704a988a7e9b170fb Nov 23 15:51:35 crc kubenswrapper[4718]: I1123 15:51:35.169060 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-77c6b"] Nov 23 15:51:35 crc kubenswrapper[4718]: I1123 15:51:35.172278 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-77c6b" Nov 23 15:51:35 crc kubenswrapper[4718]: I1123 15:51:35.189176 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-77c6b"] Nov 23 15:51:35 crc kubenswrapper[4718]: I1123 15:51:35.289339 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f5c44d3-332f-47d8-8297-179f845791e0-catalog-content\") pod \"redhat-marketplace-77c6b\" (UID: \"8f5c44d3-332f-47d8-8297-179f845791e0\") " pod="openshift-marketplace/redhat-marketplace-77c6b" Nov 23 15:51:35 crc kubenswrapper[4718]: I1123 15:51:35.289840 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlqst\" (UniqueName: \"kubernetes.io/projected/8f5c44d3-332f-47d8-8297-179f845791e0-kube-api-access-xlqst\") pod \"redhat-marketplace-77c6b\" (UID: \"8f5c44d3-332f-47d8-8297-179f845791e0\") " pod="openshift-marketplace/redhat-marketplace-77c6b" Nov 23 15:51:35 crc kubenswrapper[4718]: I1123 15:51:35.289926 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f5c44d3-332f-47d8-8297-179f845791e0-utilities\") pod \"redhat-marketplace-77c6b\" (UID: \"8f5c44d3-332f-47d8-8297-179f845791e0\") " pod="openshift-marketplace/redhat-marketplace-77c6b" Nov 23 15:51:35 crc kubenswrapper[4718]: I1123 15:51:35.391517 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f5c44d3-332f-47d8-8297-179f845791e0-utilities\") pod \"redhat-marketplace-77c6b\" (UID: \"8f5c44d3-332f-47d8-8297-179f845791e0\") " pod="openshift-marketplace/redhat-marketplace-77c6b" Nov 23 15:51:35 crc kubenswrapper[4718]: I1123 15:51:35.391861 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f5c44d3-332f-47d8-8297-179f845791e0-catalog-content\") pod \"redhat-marketplace-77c6b\" (UID: \"8f5c44d3-332f-47d8-8297-179f845791e0\") " pod="openshift-marketplace/redhat-marketplace-77c6b" Nov 23 15:51:35 crc kubenswrapper[4718]: I1123 15:51:35.391965 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f5c44d3-332f-47d8-8297-179f845791e0-utilities\") pod \"redhat-marketplace-77c6b\" (UID: \"8f5c44d3-332f-47d8-8297-179f845791e0\") " pod="openshift-marketplace/redhat-marketplace-77c6b" Nov 23 
15:51:35 crc kubenswrapper[4718]: I1123 15:51:35.391977 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlqst\" (UniqueName: \"kubernetes.io/projected/8f5c44d3-332f-47d8-8297-179f845791e0-kube-api-access-xlqst\") pod \"redhat-marketplace-77c6b\" (UID: \"8f5c44d3-332f-47d8-8297-179f845791e0\") " pod="openshift-marketplace/redhat-marketplace-77c6b" Nov 23 15:51:35 crc kubenswrapper[4718]: I1123 15:51:35.392461 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f5c44d3-332f-47d8-8297-179f845791e0-catalog-content\") pod \"redhat-marketplace-77c6b\" (UID: \"8f5c44d3-332f-47d8-8297-179f845791e0\") " pod="openshift-marketplace/redhat-marketplace-77c6b" Nov 23 15:51:35 crc kubenswrapper[4718]: I1123 15:51:35.411858 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlqst\" (UniqueName: \"kubernetes.io/projected/8f5c44d3-332f-47d8-8297-179f845791e0-kube-api-access-xlqst\") pod \"redhat-marketplace-77c6b\" (UID: \"8f5c44d3-332f-47d8-8297-179f845791e0\") " pod="openshift-marketplace/redhat-marketplace-77c6b" Nov 23 15:51:35 crc kubenswrapper[4718]: I1123 15:51:35.529219 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7qj2/crc-debug-66swq" event={"ID":"e733e024-9d77-4ec6-ba06-96cf418c5146","Type":"ContainerStarted","Data":"6e8f987daecce6dc3eb230c4da0ceb8815be1e9275f5470704a988a7e9b170fb"} Nov 23 15:51:35 crc kubenswrapper[4718]: I1123 15:51:35.590173 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-77c6b" Nov 23 15:51:36 crc kubenswrapper[4718]: I1123 15:51:36.103754 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-77c6b"] Nov 23 15:51:36 crc kubenswrapper[4718]: I1123 15:51:36.540908 4718 generic.go:334] "Generic (PLEG): container finished" podID="e733e024-9d77-4ec6-ba06-96cf418c5146" containerID="b8abf70557dc4837a7c037c1bb32ad3b917d8bef9a5e3dafd61228cf2886e1d6" exitCode=0 Nov 23 15:51:36 crc kubenswrapper[4718]: I1123 15:51:36.541002 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7qj2/crc-debug-66swq" event={"ID":"e733e024-9d77-4ec6-ba06-96cf418c5146","Type":"ContainerDied","Data":"b8abf70557dc4837a7c037c1bb32ad3b917d8bef9a5e3dafd61228cf2886e1d6"} Nov 23 15:51:36 crc kubenswrapper[4718]: I1123 15:51:36.542924 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77c6b" event={"ID":"8f5c44d3-332f-47d8-8297-179f845791e0","Type":"ContainerStarted","Data":"70019eaac6c19113160b5281743b4011b6764afa9e082fbde4a44d23ac273807"} Nov 23 15:51:37 crc kubenswrapper[4718]: I1123 15:51:37.554026 4718 generic.go:334] "Generic (PLEG): container finished" podID="8f5c44d3-332f-47d8-8297-179f845791e0" containerID="6c0ea890a90e487812a0832e76fa4ab57c0ce33191702ef0ffa6ef74e188a249" exitCode=0 Nov 23 15:51:37 crc kubenswrapper[4718]: I1123 15:51:37.554122 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77c6b" event={"ID":"8f5c44d3-332f-47d8-8297-179f845791e0","Type":"ContainerDied","Data":"6c0ea890a90e487812a0832e76fa4ab57c0ce33191702ef0ffa6ef74e188a249"} Nov 23 15:51:37 crc kubenswrapper[4718]: I1123 15:51:37.614388 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x7qj2/crc-debug-66swq"] Nov 23 15:51:37 crc kubenswrapper[4718]: 
I1123 15:51:37.626107 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x7qj2/crc-debug-66swq"] Nov 23 15:51:38 crc kubenswrapper[4718]: I1123 15:51:38.569217 4718 generic.go:334] "Generic (PLEG): container finished" podID="8f5c44d3-332f-47d8-8297-179f845791e0" containerID="60ba550662667b64999cc2354c44803ba3ca8307956f9d2d0f574635f5757467" exitCode=0 Nov 23 15:51:38 crc kubenswrapper[4718]: I1123 15:51:38.569294 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77c6b" event={"ID":"8f5c44d3-332f-47d8-8297-179f845791e0","Type":"ContainerDied","Data":"60ba550662667b64999cc2354c44803ba3ca8307956f9d2d0f574635f5757467"} Nov 23 15:51:38 crc kubenswrapper[4718]: I1123 15:51:38.670258 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7qj2/crc-debug-66swq" Nov 23 15:51:38 crc kubenswrapper[4718]: I1123 15:51:38.754771 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98jm7\" (UniqueName: \"kubernetes.io/projected/e733e024-9d77-4ec6-ba06-96cf418c5146-kube-api-access-98jm7\") pod \"e733e024-9d77-4ec6-ba06-96cf418c5146\" (UID: \"e733e024-9d77-4ec6-ba06-96cf418c5146\") " Nov 23 15:51:38 crc kubenswrapper[4718]: I1123 15:51:38.754895 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e733e024-9d77-4ec6-ba06-96cf418c5146-host\") pod \"e733e024-9d77-4ec6-ba06-96cf418c5146\" (UID: \"e733e024-9d77-4ec6-ba06-96cf418c5146\") " Nov 23 15:51:38 crc kubenswrapper[4718]: I1123 15:51:38.755022 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e733e024-9d77-4ec6-ba06-96cf418c5146-host" (OuterVolumeSpecName: "host") pod "e733e024-9d77-4ec6-ba06-96cf418c5146" (UID: "e733e024-9d77-4ec6-ba06-96cf418c5146"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 15:51:38 crc kubenswrapper[4718]: I1123 15:51:38.755417 4718 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e733e024-9d77-4ec6-ba06-96cf418c5146-host\") on node \"crc\" DevicePath \"\"" Nov 23 15:51:38 crc kubenswrapper[4718]: I1123 15:51:38.761207 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e733e024-9d77-4ec6-ba06-96cf418c5146-kube-api-access-98jm7" (OuterVolumeSpecName: "kube-api-access-98jm7") pod "e733e024-9d77-4ec6-ba06-96cf418c5146" (UID: "e733e024-9d77-4ec6-ba06-96cf418c5146"). InnerVolumeSpecName "kube-api-access-98jm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:51:38 crc kubenswrapper[4718]: I1123 15:51:38.857307 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98jm7\" (UniqueName: \"kubernetes.io/projected/e733e024-9d77-4ec6-ba06-96cf418c5146-kube-api-access-98jm7\") on node \"crc\" DevicePath \"\"" Nov 23 15:51:39 crc kubenswrapper[4718]: I1123 15:51:39.581749 4718 scope.go:117] "RemoveContainer" containerID="b8abf70557dc4837a7c037c1bb32ad3b917d8bef9a5e3dafd61228cf2886e1d6" Nov 23 15:51:39 crc kubenswrapper[4718]: I1123 15:51:39.581852 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x7qj2/crc-debug-66swq" Nov 23 15:51:40 crc kubenswrapper[4718]: I1123 15:51:40.455611 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e733e024-9d77-4ec6-ba06-96cf418c5146" path="/var/lib/kubelet/pods/e733e024-9d77-4ec6-ba06-96cf418c5146/volumes" Nov 23 15:51:40 crc kubenswrapper[4718]: I1123 15:51:40.594511 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77c6b" event={"ID":"8f5c44d3-332f-47d8-8297-179f845791e0","Type":"ContainerStarted","Data":"e3f698b2cfd644139cbb1417bdbe322fcd43eb7d27f9aa5d239addfa72be0281"} Nov 23 15:51:40 crc kubenswrapper[4718]: I1123 15:51:40.620008 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-77c6b" podStartSLOduration=3.542205373 podStartE2EDuration="5.619981099s" podCreationTimestamp="2025-11-23 15:51:35 +0000 UTC" firstStartedPulling="2025-11-23 15:51:37.556390309 +0000 UTC m=+3948.796010153" lastFinishedPulling="2025-11-23 15:51:39.634166035 +0000 UTC m=+3950.873785879" observedRunningTime="2025-11-23 15:51:40.611401598 +0000 UTC m=+3951.851021442" watchObservedRunningTime="2025-11-23 15:51:40.619981099 +0000 UTC m=+3951.859600943" Nov 23 15:51:45 crc kubenswrapper[4718]: I1123 15:51:45.591707 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-77c6b" Nov 23 15:51:45 crc kubenswrapper[4718]: I1123 15:51:45.591983 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-77c6b" Nov 23 15:51:45 crc kubenswrapper[4718]: I1123 15:51:45.643832 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-77c6b" Nov 23 15:51:45 crc kubenswrapper[4718]: I1123 15:51:45.703131 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-77c6b" Nov 23 15:51:46 crc kubenswrapper[4718]: I1123 15:51:46.066372 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-77c6b"] Nov 23 15:51:47 crc kubenswrapper[4718]: I1123 15:51:47.651226 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-77c6b" podUID="8f5c44d3-332f-47d8-8297-179f845791e0" containerName="registry-server" containerID="cri-o://e3f698b2cfd644139cbb1417bdbe322fcd43eb7d27f9aa5d239addfa72be0281" gracePeriod=2 Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.087273 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-77c6b" Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.132514 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f5c44d3-332f-47d8-8297-179f845791e0-catalog-content\") pod \"8f5c44d3-332f-47d8-8297-179f845791e0\" (UID: \"8f5c44d3-332f-47d8-8297-179f845791e0\") " Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.132715 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlqst\" (UniqueName: \"kubernetes.io/projected/8f5c44d3-332f-47d8-8297-179f845791e0-kube-api-access-xlqst\") pod \"8f5c44d3-332f-47d8-8297-179f845791e0\" (UID: \"8f5c44d3-332f-47d8-8297-179f845791e0\") " Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.132826 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f5c44d3-332f-47d8-8297-179f845791e0-utilities\") pod \"8f5c44d3-332f-47d8-8297-179f845791e0\" (UID: \"8f5c44d3-332f-47d8-8297-179f845791e0\") " Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.134317 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f5c44d3-332f-47d8-8297-179f845791e0-utilities" (OuterVolumeSpecName: "utilities") pod "8f5c44d3-332f-47d8-8297-179f845791e0" (UID: "8f5c44d3-332f-47d8-8297-179f845791e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.140421 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f5c44d3-332f-47d8-8297-179f845791e0-kube-api-access-xlqst" (OuterVolumeSpecName: "kube-api-access-xlqst") pod "8f5c44d3-332f-47d8-8297-179f845791e0" (UID: "8f5c44d3-332f-47d8-8297-179f845791e0"). InnerVolumeSpecName "kube-api-access-xlqst". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.157904 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f5c44d3-332f-47d8-8297-179f845791e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f5c44d3-332f-47d8-8297-179f845791e0" (UID: "8f5c44d3-332f-47d8-8297-179f845791e0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.236218 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlqst\" (UniqueName: \"kubernetes.io/projected/8f5c44d3-332f-47d8-8297-179f845791e0-kube-api-access-xlqst\") on node \"crc\" DevicePath \"\"" Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.236274 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f5c44d3-332f-47d8-8297-179f845791e0-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.236284 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f5c44d3-332f-47d8-8297-179f845791e0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.677279 4718 generic.go:334] "Generic (PLEG): container finished" podID="8f5c44d3-332f-47d8-8297-179f845791e0" containerID="e3f698b2cfd644139cbb1417bdbe322fcd43eb7d27f9aa5d239addfa72be0281" exitCode=0 Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.677334 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77c6b" event={"ID":"8f5c44d3-332f-47d8-8297-179f845791e0","Type":"ContainerDied","Data":"e3f698b2cfd644139cbb1417bdbe322fcd43eb7d27f9aa5d239addfa72be0281"} Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.677371 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77c6b" event={"ID":"8f5c44d3-332f-47d8-8297-179f845791e0","Type":"ContainerDied","Data":"70019eaac6c19113160b5281743b4011b6764afa9e082fbde4a44d23ac273807"} Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.677556 4718 scope.go:117] "RemoveContainer" containerID="e3f698b2cfd644139cbb1417bdbe322fcd43eb7d27f9aa5d239addfa72be0281" Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.677618 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-77c6b" Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.705351 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-77c6b"] Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.715681 4718 scope.go:117] "RemoveContainer" containerID="60ba550662667b64999cc2354c44803ba3ca8307956f9d2d0f574635f5757467" Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.720661 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-77c6b"] Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.816503 4718 scope.go:117] "RemoveContainer" containerID="6c0ea890a90e487812a0832e76fa4ab57c0ce33191702ef0ffa6ef74e188a249" Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.865504 4718 scope.go:117] "RemoveContainer" containerID="e3f698b2cfd644139cbb1417bdbe322fcd43eb7d27f9aa5d239addfa72be0281" Nov 23 15:51:48 crc kubenswrapper[4718]: E1123 15:51:48.866133 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3f698b2cfd644139cbb1417bdbe322fcd43eb7d27f9aa5d239addfa72be0281\": container with ID starting with e3f698b2cfd644139cbb1417bdbe322fcd43eb7d27f9aa5d239addfa72be0281 not found: ID does not exist" containerID="e3f698b2cfd644139cbb1417bdbe322fcd43eb7d27f9aa5d239addfa72be0281" Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.866179 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3f698b2cfd644139cbb1417bdbe322fcd43eb7d27f9aa5d239addfa72be0281"} err="failed to get container status \"e3f698b2cfd644139cbb1417bdbe322fcd43eb7d27f9aa5d239addfa72be0281\": rpc error: code = NotFound desc = could not find container \"e3f698b2cfd644139cbb1417bdbe322fcd43eb7d27f9aa5d239addfa72be0281\": container with ID starting with e3f698b2cfd644139cbb1417bdbe322fcd43eb7d27f9aa5d239addfa72be0281 not found: ID does not exist" Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.866208 4718 scope.go:117] "RemoveContainer" containerID="60ba550662667b64999cc2354c44803ba3ca8307956f9d2d0f574635f5757467" Nov 23 15:51:48 crc kubenswrapper[4718]: E1123 15:51:48.866647 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60ba550662667b64999cc2354c44803ba3ca8307956f9d2d0f574635f5757467\": container with ID starting with 60ba550662667b64999cc2354c44803ba3ca8307956f9d2d0f574635f5757467 not found: ID does not exist" containerID="60ba550662667b64999cc2354c44803ba3ca8307956f9d2d0f574635f5757467" Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.866670 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60ba550662667b64999cc2354c44803ba3ca8307956f9d2d0f574635f5757467"} err="failed to get container status \"60ba550662667b64999cc2354c44803ba3ca8307956f9d2d0f574635f5757467\": rpc error: code = NotFound desc = could not find container \"60ba550662667b64999cc2354c44803ba3ca8307956f9d2d0f574635f5757467\": container with ID starting with 60ba550662667b64999cc2354c44803ba3ca8307956f9d2d0f574635f5757467 not found: ID does not exist" Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.866682 4718 scope.go:117] "RemoveContainer" containerID="6c0ea890a90e487812a0832e76fa4ab57c0ce33191702ef0ffa6ef74e188a249" Nov 23 15:51:48 crc kubenswrapper[4718]: E1123 15:51:48.867041 4718 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6c0ea890a90e487812a0832e76fa4ab57c0ce33191702ef0ffa6ef74e188a249\": container with ID starting with 6c0ea890a90e487812a0832e76fa4ab57c0ce33191702ef0ffa6ef74e188a249 not found: ID does not exist" containerID="6c0ea890a90e487812a0832e76fa4ab57c0ce33191702ef0ffa6ef74e188a249" Nov 23 15:51:48 crc kubenswrapper[4718]: I1123 15:51:48.867067 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c0ea890a90e487812a0832e76fa4ab57c0ce33191702ef0ffa6ef74e188a249"} err="failed to get container status \"6c0ea890a90e487812a0832e76fa4ab57c0ce33191702ef0ffa6ef74e188a249\": rpc error: code = NotFound desc = could not find container \"6c0ea890a90e487812a0832e76fa4ab57c0ce33191702ef0ffa6ef74e188a249\": container with ID starting with 6c0ea890a90e487812a0832e76fa4ab57c0ce33191702ef0ffa6ef74e188a249 not found: ID does not exist" Nov 23 15:51:50 crc kubenswrapper[4718]: I1123 15:51:50.455854 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f5c44d3-332f-47d8-8297-179f845791e0" path="/var/lib/kubelet/pods/8f5c44d3-332f-47d8-8297-179f845791e0/volumes" Nov 23 15:51:59 crc kubenswrapper[4718]: I1123 15:51:59.005455 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5c6d99969d-lw2d7_cdc09b60-8801-4296-98e6-94a2e5ac8697/barbican-api/0.log" Nov 23 15:51:59 crc kubenswrapper[4718]: I1123 15:51:59.127412 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5c6d99969d-lw2d7_cdc09b60-8801-4296-98e6-94a2e5ac8697/barbican-api-log/0.log" Nov 23 15:51:59 crc kubenswrapper[4718]: I1123 15:51:59.185507 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-74dd8678b6-fs5mq_effa4eb1-dc32-4d96-8f19-0eaae852f9a1/barbican-keystone-listener/0.log" Nov 23 15:51:59 crc kubenswrapper[4718]: I1123 15:51:59.266341 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-74dd8678b6-fs5mq_effa4eb1-dc32-4d96-8f19-0eaae852f9a1/barbican-keystone-listener-log/0.log" Nov 23 15:51:59 crc kubenswrapper[4718]: I1123 15:51:59.880985 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-95d8d74d7-xhwwk_4fcdc6b9-b082-4a9d-b905-7978d941b38f/barbican-worker-log/0.log" Nov 23 15:51:59 crc kubenswrapper[4718]: I1123 15:51:59.894383 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-95d8d74d7-xhwwk_4fcdc6b9-b082-4a9d-b905-7978d941b38f/barbican-worker/0.log" Nov 23 15:52:00 crc kubenswrapper[4718]: I1123 15:52:00.099378 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-4wk8z_ba8b5a43-6edd-4e8c-bafe-e6e0434edb6c/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:52:00 crc kubenswrapper[4718]: I1123 15:52:00.176828 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_786c1d0e-1895-4b3f-a95e-537692e9685d/ceilometer-central-agent/0.log" Nov 23 15:52:00 crc kubenswrapper[4718]: I1123 15:52:00.178763 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_786c1d0e-1895-4b3f-a95e-537692e9685d/ceilometer-notification-agent/0.log" Nov 23 15:52:00 crc kubenswrapper[4718]: I1123 15:52:00.293532 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_786c1d0e-1895-4b3f-a95e-537692e9685d/sg-core/0.log" Nov 23 15:52:00 
crc kubenswrapper[4718]: I1123 15:52:00.302212 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_786c1d0e-1895-4b3f-a95e-537692e9685d/proxy-httpd/0.log" Nov 23 15:52:00 crc kubenswrapper[4718]: I1123 15:52:00.406072 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3a35d79c-43f8-4fbb-822d-d4b42a332068/cinder-api/0.log" Nov 23 15:52:00 crc kubenswrapper[4718]: I1123 15:52:00.526058 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3a35d79c-43f8-4fbb-822d-d4b42a332068/cinder-api-log/0.log" Nov 23 15:52:00 crc kubenswrapper[4718]: I1123 15:52:00.630388 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1ac027eb-4d7f-4d21-8689-9ed48cd5b35b/probe/0.log" Nov 23 15:52:00 crc kubenswrapper[4718]: I1123 15:52:00.639321 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1ac027eb-4d7f-4d21-8689-9ed48cd5b35b/cinder-scheduler/0.log" Nov 23 15:52:00 crc kubenswrapper[4718]: I1123 15:52:00.942776 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-j5s4b_843c6972-d172-42d0-9c7c-bacd49de3307/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:52:01 crc kubenswrapper[4718]: I1123 15:52:01.003258 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-2kfl2_4785d6c3-899e-4bfd-9333-b4493a1cae09/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:52:01 crc kubenswrapper[4718]: I1123 15:52:01.146003 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-44tvh_680f2aea-fad7-47b3-aabb-06c149297a03/init/0.log" Nov 23 15:52:01 crc kubenswrapper[4718]: I1123 15:52:01.327833 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-44tvh_680f2aea-fad7-47b3-aabb-06c149297a03/init/0.log" Nov 23 15:52:01 crc kubenswrapper[4718]: I1123 15:52:01.365281 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-84twb_253a44f7-f768-49ca-88c1-de87b9cbcbbb/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:52:01 crc kubenswrapper[4718]: I1123 15:52:01.392751 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-44tvh_680f2aea-fad7-47b3-aabb-06c149297a03/dnsmasq-dns/0.log" Nov 23 15:52:01 crc kubenswrapper[4718]: I1123 15:52:01.557380 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_34d380e0-2ed9-45ce-9c05-85b138e3a99a/glance-httpd/0.log" Nov 23 15:52:01 crc kubenswrapper[4718]: I1123 15:52:01.604397 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_34d380e0-2ed9-45ce-9c05-85b138e3a99a/glance-log/0.log" Nov 23 15:52:01 crc kubenswrapper[4718]: I1123 15:52:01.754569 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_09c60904-5047-4206-97a2-57b5c85a22d5/glance-log/0.log" Nov 23 15:52:01 crc kubenswrapper[4718]: I1123 15:52:01.762054 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_09c60904-5047-4206-97a2-57b5c85a22d5/glance-httpd/0.log" Nov 23 15:52:01 crc kubenswrapper[4718]: I1123 15:52:01.875238 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-65c9478d8d-nxsfl_ebb73efe-fe18-4507-b723-d3dbf1d8ed91/horizon/0.log" Nov 23 15:52:01 crc kubenswrapper[4718]: I1123 15:52:01.961247 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-dxbcx_e04ec17a-ea46-47b8-ac60-1e2a22849b63/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:52:02 crc kubenswrapper[4718]: I1123 15:52:02.150575 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-z276b_6bc22e89-875e-4f3a-93b3-5e738e897c23/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:52:02 crc kubenswrapper[4718]: I1123 15:52:02.226699 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-65c9478d8d-nxsfl_ebb73efe-fe18-4507-b723-d3dbf1d8ed91/horizon-log/0.log" Nov 23 15:52:02 crc kubenswrapper[4718]: I1123 15:52:02.414026 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_34c11acb-21ce-4e87-baab-f6f765d508cf/kube-state-metrics/0.log" Nov 23 15:52:02 crc kubenswrapper[4718]: I1123 15:52:02.421282 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-844cdbd5f8-ptmjk_e7d1c35f-1d40-4385-9579-dc7477cc104d/keystone-api/0.log" Nov 23 15:52:02 crc kubenswrapper[4718]: I1123 15:52:02.507735 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-ctvrj_e50f1c92-4d4a-4a83-bf46-c8268c34d373/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:52:02 crc kubenswrapper[4718]: I1123 15:52:02.845396 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6566df567c-72brl_14545901-b770-4d66-8692-51937e97d24a/neutron-httpd/0.log" Nov 23 15:52:02 crc kubenswrapper[4718]: I1123 15:52:02.848991 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6566df567c-72brl_14545901-b770-4d66-8692-51937e97d24a/neutron-api/0.log" Nov 23 15:52:02 crc kubenswrapper[4718]: I1123 15:52:02.950131 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-4tjzw_a7538df6-0093-40e0-b2da-59c2273f1f0f/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:52:03 crc kubenswrapper[4718]: I1123 15:52:03.417136 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1bc63d6b-fef8-4086-bdf2-56e1ecb469bd/nova-api-log/0.log" Nov 23 15:52:03 crc kubenswrapper[4718]: I1123 15:52:03.633127 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b70088cf-3265-44f6-b723-5c5317dd1f54/nova-cell0-conductor-conductor/0.log" Nov 23 15:52:03 crc kubenswrapper[4718]: I1123 15:52:03.801026 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_427b4910-7814-4a43-8b0a-b91b05aed240/nova-cell1-conductor-conductor/0.log" Nov 23 15:52:03 crc kubenswrapper[4718]: I1123 15:52:03.879613 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1bc63d6b-fef8-4086-bdf2-56e1ecb469bd/nova-api-api/0.log" Nov 23 15:52:03 crc kubenswrapper[4718]: I1123 15:52:03.978193 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f5538cf1-7653-415e-8b15-851291e281f1/nova-cell1-novncproxy-novncproxy/0.log" Nov 23 15:52:04 crc kubenswrapper[4718]: I1123 15:52:04.047676 4718 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-hl7dp_b4aa1c8b-75a3-4a8f-98e1-25456c27560f/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 15:52:04 crc kubenswrapper[4718]: I1123 15:52:04.230333 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_8c3e138f-87e1-4b20-8fba-0fa931f9e09e/nova-metadata-log/0.log" Nov 23 15:52:04 crc kubenswrapper[4718]: I1123 15:52:04.504083 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_da1e3b17-14ea-456e-a694-073e8fd4edaf/mysql-bootstrap/0.log" Nov 23 15:52:04 crc kubenswrapper[4718]: I1123 15:52:04.604959 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_8446752b-4a28-452c-8df8-6ac8558b7754/nova-scheduler-scheduler/0.log" Nov 23 15:52:04 crc kubenswrapper[4718]: I1123 15:52:04.717712 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_da1e3b17-14ea-456e-a694-073e8fd4edaf/mysql-bootstrap/0.log" Nov 23 15:52:04 crc kubenswrapper[4718]: I1123 15:52:04.720684 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_da1e3b17-14ea-456e-a694-073e8fd4edaf/galera/0.log" Nov 23 15:52:04 crc kubenswrapper[4718]: I1123 15:52:04.908166 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_adbd2274-f81b-4930-85c1-eec8a7a3790d/mysql-bootstrap/0.log" Nov 23 15:52:05 crc kubenswrapper[4718]: I1123 15:52:05.086619 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_adbd2274-f81b-4930-85c1-eec8a7a3790d/mysql-bootstrap/0.log" Nov 23 15:52:05 crc kubenswrapper[4718]: I1123 15:52:05.096112 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_adbd2274-f81b-4930-85c1-eec8a7a3790d/galera/0.log" Nov 23 15:52:05 crc kubenswrapper[4718]: I1123 15:52:05.303777 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a2720257-17da-4635-bd8c-2d65b9e8b9f0/openstackclient/0.log" Nov 23 15:52:05 crc kubenswrapper[4718]: I1123 15:52:05.392764 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-dc8hm_65b3425b-bedb-4274-a600-091b1910a2d7/ovn-controller/0.log" Nov 23 15:52:05 crc kubenswrapper[4718]: I1123 15:52:05.561087 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jfjhx_d8ae0875-d71a-40f8-8db0-6af6b7acd60f/openstack-network-exporter/0.log" Nov 23 15:52:05 crc kubenswrapper[4718]: I1123 15:52:05.728277 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_8c3e138f-87e1-4b20-8fba-0fa931f9e09e/nova-metadata-metadata/0.log" Nov 23 15:52:05 crc kubenswrapper[4718]: I1123 15:52:05.744546 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-86mpq_e6e54f9e-4a86-41d3-9723-9455c682fddc/ovsdb-server-init/0.log" Nov 23 15:52:05 crc kubenswrapper[4718]: I1123 15:52:05.902566 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-86mpq_e6e54f9e-4a86-41d3-9723-9455c682fddc/ovs-vswitchd/0.log" Nov 23 15:52:05 crc kubenswrapper[4718]: I1123 15:52:05.923749 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-86mpq_e6e54f9e-4a86-41d3-9723-9455c682fddc/ovsdb-server-init/0.log" Nov 23 15:52:05 crc kubenswrapper[4718]: I1123 15:52:05.976733 4718 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-86mpq_e6e54f9e-4a86-41d3-9723-9455c682fddc/ovsdb-server/0.log"
Nov 23 15:52:06 crc kubenswrapper[4718]: I1123 15:52:06.170294 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-b7ghx_cea34a47-7094-4217-8086-0c4d9ec9d23f/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 23 15:52:06 crc kubenswrapper[4718]: I1123 15:52:06.203828 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f1696a6b-d5a7-403f-b9d0-168c0e42a937/openstack-network-exporter/0.log"
Nov 23 15:52:06 crc kubenswrapper[4718]: I1123 15:52:06.224422 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f1696a6b-d5a7-403f-b9d0-168c0e42a937/ovn-northd/0.log"
Nov 23 15:52:06 crc kubenswrapper[4718]: I1123 15:52:06.387945 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a0d21970-a68c-4d2b-bbcb-18ae83284d95/openstack-network-exporter/0.log"
Nov 23 15:52:06 crc kubenswrapper[4718]: I1123 15:52:06.455083 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a0d21970-a68c-4d2b-bbcb-18ae83284d95/ovsdbserver-nb/0.log"
Nov 23 15:52:06 crc kubenswrapper[4718]: I1123 15:52:06.569279 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_33d2daa7-22e5-4713-9cc1-3d976c1559e3/openstack-network-exporter/0.log"
Nov 23 15:52:06 crc kubenswrapper[4718]: I1123 15:52:06.593759 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_33d2daa7-22e5-4713-9cc1-3d976c1559e3/ovsdbserver-sb/0.log"
Nov 23 15:52:06 crc kubenswrapper[4718]: I1123 15:52:06.725653 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7fdf4df4d-qlcjn_af2372a3-1e0c-4668-ab25-cfb8616a6d1b/placement-api/0.log"
Nov 23 15:52:06 crc kubenswrapper[4718]: I1123 15:52:06.871736 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7fdf4df4d-qlcjn_af2372a3-1e0c-4668-ab25-cfb8616a6d1b/placement-log/0.log"
Nov 23 15:52:06 crc kubenswrapper[4718]: I1123 15:52:06.898379 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_133d2692-e5ce-4298-89d3-6fc11ab5f0b3/setup-container/0.log"
Nov 23 15:52:07 crc kubenswrapper[4718]: I1123 15:52:07.087207 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_133d2692-e5ce-4298-89d3-6fc11ab5f0b3/setup-container/0.log"
Nov 23 15:52:07 crc kubenswrapper[4718]: I1123 15:52:07.097627 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_133d2692-e5ce-4298-89d3-6fc11ab5f0b3/rabbitmq/0.log"
Nov 23 15:52:07 crc kubenswrapper[4718]: I1123 15:52:07.138694 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6cd89fb4-b66f-4df5-940d-fe185bd5e039/setup-container/0.log"
Nov 23 15:52:07 crc kubenswrapper[4718]: I1123 15:52:07.316696 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6cd89fb4-b66f-4df5-940d-fe185bd5e039/setup-container/0.log"
Nov 23 15:52:07 crc kubenswrapper[4718]: I1123 15:52:07.413572 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6cd89fb4-b66f-4df5-940d-fe185bd5e039/rabbitmq/0.log"
Nov 23 15:52:07 crc kubenswrapper[4718]: I1123 15:52:07.418268 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-vnlbr_ce6b5147-989d-4b02-987b-1b6b3c4d9460/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 23 15:52:07 crc kubenswrapper[4718]: I1123 15:52:07.624905 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-rnd7h_834d3c5a-a503-42a6-a71d-8e00fe358ec6/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 23 15:52:07 crc kubenswrapper[4718]: I1123 15:52:07.627604 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-km6kx_74d2f450-1dcb-42ee-9b6f-da7389ddc9ce/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 23 15:52:08 crc kubenswrapper[4718]: I1123 15:52:08.253095 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-lsf9c_e84dd6f8-ea71-48e6-ae5f-d8f6a2af9485/ssh-known-hosts-edpm-deployment/0.log"
Nov 23 15:52:08 crc kubenswrapper[4718]: I1123 15:52:08.266607 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-4ngvk_a4cabb6a-7840-4f18-af7f-03bfffcb4c11/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 23 15:52:08 crc kubenswrapper[4718]: I1123 15:52:08.556036 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-589b8777c9-j8mvv_62536478-1337-4bad-b5e3-77cf6dd4d54b/proxy-server/0.log"
Nov 23 15:52:08 crc kubenswrapper[4718]: I1123 15:52:08.582867 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-589b8777c9-j8mvv_62536478-1337-4bad-b5e3-77cf6dd4d54b/proxy-httpd/0.log"
Nov 23 15:52:08 crc kubenswrapper[4718]: I1123 15:52:08.696936 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-sx654_79cae030-375c-4b0e-9dfc-e823f922196b/swift-ring-rebalance/0.log"
Nov 23 15:52:08 crc kubenswrapper[4718]: I1123 15:52:08.792574 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/account-auditor/0.log"
Nov 23 15:52:08 crc kubenswrapper[4718]: I1123 15:52:08.802663 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/account-reaper/0.log"
Nov 23 15:52:08 crc kubenswrapper[4718]: I1123 15:52:08.984100 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/account-replicator/0.log"
Nov 23 15:52:08 crc kubenswrapper[4718]: I1123 15:52:08.990572 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/account-server/0.log"
Nov 23 15:52:09 crc kubenswrapper[4718]: I1123 15:52:09.002522 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/container-auditor/0.log"
Nov 23 15:52:09 crc kubenswrapper[4718]: I1123 15:52:09.088196 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/container-replicator/0.log"
Nov 23 15:52:09 crc kubenswrapper[4718]: I1123 15:52:09.169190 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/container-updater/0.log"
Nov 23 15:52:09 crc kubenswrapper[4718]: I1123 15:52:09.233659 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/container-server/0.log"
Nov 23 15:52:09 crc kubenswrapper[4718]: I1123 15:52:09.244200 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/object-auditor/0.log"
Nov 23 15:52:09 crc kubenswrapper[4718]: I1123 15:52:09.824586 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/object-expirer/0.log"
Nov 23 15:52:09 crc kubenswrapper[4718]: I1123 15:52:09.830794 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/object-server/0.log"
Nov 23 15:52:09 crc kubenswrapper[4718]: I1123 15:52:09.845417 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/object-replicator/0.log"
Nov 23 15:52:09 crc kubenswrapper[4718]: I1123 15:52:09.846326 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/object-updater/0.log"
Nov 23 15:52:10 crc kubenswrapper[4718]: I1123 15:52:10.021624 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/swift-recon-cron/0.log"
Nov 23 15:52:10 crc kubenswrapper[4718]: I1123 15:52:10.033898 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ef94753b-867a-4e46-9ff8-66178f25efaa/rsync/0.log"
Nov 23 15:52:10 crc kubenswrapper[4718]: I1123 15:52:10.159954 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-n6v68_34a8617a-3a87-48ad-b752-b324eaac4afe/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 23 15:52:10 crc kubenswrapper[4718]: I1123 15:52:10.225112 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_2053f5ea-ae54-4b1d-951f-2355f69f1062/tempest-tests-tempest-tests-runner/0.log"
Nov 23 15:52:10 crc kubenswrapper[4718]: I1123 15:52:10.393317 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_eebef478-ad56-44b6-8ecf-20cc943f86b3/test-operator-logs-container/0.log"
Nov 23 15:52:10 crc kubenswrapper[4718]: I1123 15:52:10.540768 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-tbm9m_a5d2bb5d-31ec-4fcf-b315-d2522b6e3f6f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 23 15:52:21 crc kubenswrapper[4718]: I1123 15:52:21.381225 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_b5324d05-32a6-4859-9288-de1f3bd9389d/memcached/0.log"
Nov 23 15:52:35 crc kubenswrapper[4718]: I1123 15:52:35.494651 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75fb479bcc-kwnhv_e402a0ac-7f35-4bab-9948-b664c0ef9636/kube-rbac-proxy/0.log"
Nov 23 15:52:35 crc kubenswrapper[4718]: I1123 15:52:35.581143 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75fb479bcc-kwnhv_e402a0ac-7f35-4bab-9948-b664c0ef9636/manager/0.log"
Nov 23 15:52:35 crc kubenswrapper[4718]: I1123 15:52:35.719669 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6498cbf48f-k82wv_b5aad852-89aa-459e-8771-50ef010620ef/kube-rbac-proxy/0.log"
Nov 23 15:52:35 crc kubenswrapper[4718]: I1123 15:52:35.760560 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6498cbf48f-k82wv_b5aad852-89aa-459e-8771-50ef010620ef/manager/0.log"
Nov 23 15:52:35 crc kubenswrapper[4718]: I1123 15:52:35.878852 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-767ccfd65f-v5dn9_ce59b10a-2110-44a2-9489-b1e06f6a1032/kube-rbac-proxy/0.log"
Nov 23 15:52:36 crc kubenswrapper[4718]: I1123 15:52:36.612318 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-767ccfd65f-v5dn9_ce59b10a-2110-44a2-9489-b1e06f6a1032/manager/0.log"
Nov 23 15:52:36 crc kubenswrapper[4718]: I1123 15:52:36.639565 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn_50ab9973-a819-4833-aaab-955e5d2eb560/util/0.log"
Nov 23 15:52:36 crc kubenswrapper[4718]: I1123 15:52:36.744795 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn_50ab9973-a819-4833-aaab-955e5d2eb560/util/0.log"
Nov 23 15:52:36 crc kubenswrapper[4718]: I1123 15:52:36.785982 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn_50ab9973-a819-4833-aaab-955e5d2eb560/pull/0.log"
Nov 23 15:52:36 crc kubenswrapper[4718]: I1123 15:52:36.839203 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn_50ab9973-a819-4833-aaab-955e5d2eb560/pull/0.log"
Nov 23 15:52:37 crc kubenswrapper[4718]: I1123 15:52:37.012068 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn_50ab9973-a819-4833-aaab-955e5d2eb560/util/0.log"
Nov 23 15:52:37 crc kubenswrapper[4718]: I1123 15:52:37.026109 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn_50ab9973-a819-4833-aaab-955e5d2eb560/pull/0.log"
Nov 23 15:52:37 crc kubenswrapper[4718]: I1123 15:52:37.049405 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e27345db6007af157f9c9720a7d718ea21605eb25c2fa6eb354135e7a55p6kn_50ab9973-a819-4833-aaab-955e5d2eb560/extract/0.log"
Nov 23 15:52:37 crc kubenswrapper[4718]: I1123 15:52:37.207176 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7969689c84-h4hzc_960d1cfd-fc93-466c-8590-723c68c0bc05/kube-rbac-proxy/0.log"
Nov 23 15:52:37 crc kubenswrapper[4718]: I1123 15:52:37.270484 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-k88hn_1b5f1764-1a63-4fda-988c-49a8bc17fe79/kube-rbac-proxy/0.log"
Nov 23 15:52:37 crc kubenswrapper[4718]: I1123 15:52:37.281467 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7969689c84-h4hzc_960d1cfd-fc93-466c-8590-723c68c0bc05/manager/0.log"
Nov 23 15:52:37 crc kubenswrapper[4718]: I1123 15:52:37.448415 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-k88hn_1b5f1764-1a63-4fda-988c-49a8bc17fe79/manager/0.log"
Nov 23 15:52:37 crc kubenswrapper[4718]: I1123 15:52:37.474577 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-598f69df5d-8trcx_f4618913-9a14-4f47-89ec-9c4b0a931434/kube-rbac-proxy/0.log"
Nov 23 15:52:37 crc kubenswrapper[4718]: I1123 15:52:37.512243 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-598f69df5d-8trcx_f4618913-9a14-4f47-89ec-9c4b0a931434/manager/0.log"
Nov 23 15:52:37 crc kubenswrapper[4718]: I1123 15:52:37.667663 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6dd8864d7c-h8b4l_b72e1603-d77f-4edc-87a2-3cc5469620fe/kube-rbac-proxy/0.log"
Nov 23 15:52:37 crc kubenswrapper[4718]: I1123 15:52:37.851824 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6dd8864d7c-h8b4l_b72e1603-d77f-4edc-87a2-3cc5469620fe/manager/0.log"
Nov 23 15:52:37 crc kubenswrapper[4718]: I1123 15:52:37.870399 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-99b499f4-rw4vq_06302b9c-68a3-4b48-88d7-cc0885ca0156/kube-rbac-proxy/0.log"
Nov 23 15:52:37 crc kubenswrapper[4718]: I1123 15:52:37.907157 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-99b499f4-rw4vq_06302b9c-68a3-4b48-88d7-cc0885ca0156/manager/0.log"
Nov 23 15:52:38 crc kubenswrapper[4718]: I1123 15:52:38.042423 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7454b96578-4g92d_7cc68ab9-c26a-437b-adcd-977eb063fe25/kube-rbac-proxy/0.log"
Nov 23 15:52:38 crc kubenswrapper[4718]: I1123 15:52:38.097232 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7454b96578-4g92d_7cc68ab9-c26a-437b-adcd-977eb063fe25/manager/0.log"
Nov 23 15:52:38 crc kubenswrapper[4718]: I1123 15:52:38.684027 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58f887965d-47csv_020fa89c-9d76-439c-aee1-0843636d4469/kube-rbac-proxy/0.log"
Nov 23 15:52:38 crc kubenswrapper[4718]: I1123 15:52:38.742961 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58f887965d-47csv_020fa89c-9d76-439c-aee1-0843636d4469/manager/0.log"
Nov 23 15:52:38 crc kubenswrapper[4718]: I1123 15:52:38.888291 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-6785w_0d78d642-939c-47e3-8d60-665dff178d44/kube-rbac-proxy/0.log"
Nov 23 15:52:38 crc kubenswrapper[4718]: I1123 15:52:38.931161 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-6785w_0d78d642-939c-47e3-8d60-665dff178d44/manager/0.log"
Nov 23 15:52:38 crc kubenswrapper[4718]: I1123 15:52:38.996215 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-g72r5_0b0e1ffa-6dff-4523-911c-ad0744bd9153/kube-rbac-proxy/0.log"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.123793 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-g72r5_0b0e1ffa-6dff-4523-911c-ad0744bd9153/manager/0.log"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.181796 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-cfbb9c588-q9nlr_934a178a-2178-4c2d-bda8-9bb817f78644/kube-rbac-proxy/0.log"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.201606 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kr6x7"]
Nov 23 15:52:39 crc kubenswrapper[4718]: E1123 15:52:39.202002 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f5c44d3-332f-47d8-8297-179f845791e0" containerName="registry-server"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.202021 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f5c44d3-332f-47d8-8297-179f845791e0" containerName="registry-server"
Nov 23 15:52:39 crc kubenswrapper[4718]: E1123 15:52:39.202049 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f5c44d3-332f-47d8-8297-179f845791e0" containerName="extract-utilities"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.202056 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f5c44d3-332f-47d8-8297-179f845791e0" containerName="extract-utilities"
Nov 23 15:52:39 crc kubenswrapper[4718]: E1123 15:52:39.202077 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e733e024-9d77-4ec6-ba06-96cf418c5146" containerName="container-00"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.202083 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e733e024-9d77-4ec6-ba06-96cf418c5146" containerName="container-00"
Nov 23 15:52:39 crc kubenswrapper[4718]: E1123 15:52:39.202113 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f5c44d3-332f-47d8-8297-179f845791e0" containerName="extract-content"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.202120 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f5c44d3-332f-47d8-8297-179f845791e0" containerName="extract-content"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.202292 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f5c44d3-332f-47d8-8297-179f845791e0" containerName="registry-server"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.202316 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="e733e024-9d77-4ec6-ba06-96cf418c5146" containerName="container-00"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.203629 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kr6x7"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.213498 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kr6x7"]
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.250245 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd4b8f91-98c8-4974-bee2-6b339d5a9eae-catalog-content\") pod \"community-operators-kr6x7\" (UID: \"fd4b8f91-98c8-4974-bee2-6b339d5a9eae\") " pod="openshift-marketplace/community-operators-kr6x7"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.250435 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd4b8f91-98c8-4974-bee2-6b339d5a9eae-utilities\") pod \"community-operators-kr6x7\" (UID: \"fd4b8f91-98c8-4974-bee2-6b339d5a9eae\") " pod="openshift-marketplace/community-operators-kr6x7"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.250529 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcwp2\" (UniqueName: \"kubernetes.io/projected/fd4b8f91-98c8-4974-bee2-6b339d5a9eae-kube-api-access-pcwp2\") pod \"community-operators-kr6x7\" (UID: \"fd4b8f91-98c8-4974-bee2-6b339d5a9eae\") " pod="openshift-marketplace/community-operators-kr6x7"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.302873 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-cfbb9c588-q9nlr_934a178a-2178-4c2d-bda8-9bb817f78644/manager/0.log"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.354958 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd4b8f91-98c8-4974-bee2-6b339d5a9eae-utilities\") pod \"community-operators-kr6x7\" (UID: \"fd4b8f91-98c8-4974-bee2-6b339d5a9eae\") " pod="openshift-marketplace/community-operators-kr6x7"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.355041 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcwp2\" (UniqueName: \"kubernetes.io/projected/fd4b8f91-98c8-4974-bee2-6b339d5a9eae-kube-api-access-pcwp2\") pod \"community-operators-kr6x7\" (UID: \"fd4b8f91-98c8-4974-bee2-6b339d5a9eae\") " pod="openshift-marketplace/community-operators-kr6x7"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.355161 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd4b8f91-98c8-4974-bee2-6b339d5a9eae-catalog-content\") pod \"community-operators-kr6x7\" (UID: \"fd4b8f91-98c8-4974-bee2-6b339d5a9eae\") " pod="openshift-marketplace/community-operators-kr6x7"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.355418 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd4b8f91-98c8-4974-bee2-6b339d5a9eae-utilities\") pod \"community-operators-kr6x7\" (UID: \"fd4b8f91-98c8-4974-bee2-6b339d5a9eae\") " pod="openshift-marketplace/community-operators-kr6x7"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.355926 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd4b8f91-98c8-4974-bee2-6b339d5a9eae-catalog-content\") pod \"community-operators-kr6x7\" (UID: \"fd4b8f91-98c8-4974-bee2-6b339d5a9eae\") " pod="openshift-marketplace/community-operators-kr6x7"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.392763 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcwp2\" (UniqueName: \"kubernetes.io/projected/fd4b8f91-98c8-4974-bee2-6b339d5a9eae-kube-api-access-pcwp2\") pod \"community-operators-kr6x7\" (UID: \"fd4b8f91-98c8-4974-bee2-6b339d5a9eae\") " pod="openshift-marketplace/community-operators-kr6x7"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.416421 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-54cfbf4c7d-kpg5q_968c85cb-d53b-40e8-9651-7127fc58f61a/kube-rbac-proxy/0.log"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.483547 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-54cfbf4c7d-kpg5q_968c85cb-d53b-40e8-9651-7127fc58f61a/manager/0.log"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.549421 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kr6x7"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.850349 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk_ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d/kube-rbac-proxy/0.log"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.940917 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-669b8498dc-8hbzm_2f871886-2351-4861-a1d6-3f7711fa936e/kube-rbac-proxy/0.log"
Nov 23 15:52:39 crc kubenswrapper[4718]: I1123 15:52:39.959889 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8c7444f48-9jkvk_ae8cdd4c-2e47-4c34-b19a-ebe32f80fe3d/manager/0.log"
Nov 23 15:52:40 crc kubenswrapper[4718]: I1123 15:52:40.147483 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-597d69585c-9tk5p_5b1ab78b-6600-4c1f-a302-f3b0369892c2/kube-rbac-proxy/0.log"
Nov 23 15:52:40 crc kubenswrapper[4718]: I1123 15:52:40.166609 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kr6x7"]
Nov 23 15:52:40 crc kubenswrapper[4718]: I1123 15:52:40.391088 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-j47c5_6fbfe838-c27e-484d-a610-882fbb719e14/registry-server/0.log"
Nov 23 15:52:40 crc kubenswrapper[4718]: I1123 15:52:40.397613 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-597d69585c-9tk5p_5b1ab78b-6600-4c1f-a302-f3b0369892c2/operator/0.log"
Nov 23 15:52:40 crc kubenswrapper[4718]: I1123 15:52:40.501400 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54fc5f65b7-5wtcx_306074ad-d60e-41a2-975b-901d8874be23/kube-rbac-proxy/0.log"
Nov 23 15:52:40 crc kubenswrapper[4718]: I1123 15:52:40.678122 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54fc5f65b7-5wtcx_306074ad-d60e-41a2-975b-901d8874be23/manager/0.log"
Nov 23 15:52:40 crc kubenswrapper[4718]: I1123 15:52:40.697825 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b797b8dff-7wn4t_9d46e777-1d50-42a0-b20f-a24b155a0e43/kube-rbac-proxy/0.log"
Nov 23 15:52:40 crc kubenswrapper[4718]: I1123 15:52:40.875697 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b797b8dff-7wn4t_9d46e777-1d50-42a0-b20f-a24b155a0e43/manager/0.log"
Nov 23 15:52:40 crc kubenswrapper[4718]: I1123 15:52:40.922126 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-bnrmb_76e89747-f3cb-45cd-beff-22193095b455/operator/0.log"
Nov 23 15:52:41 crc kubenswrapper[4718]: I1123 15:52:41.142482 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d656998f4-p7dq2_313f2889-e11a-440a-8358-612780f4a348/kube-rbac-proxy/0.log"
Nov 23 15:52:41 crc kubenswrapper[4718]: I1123 15:52:41.147748 4718 generic.go:334] "Generic (PLEG): container finished" podID="fd4b8f91-98c8-4974-bee2-6b339d5a9eae" containerID="09f27bd1177d919adb145c9a391e9475c4e35e6788b85ae650bff95b6c1d599d" exitCode=0
Nov 23 15:52:41 crc kubenswrapper[4718]: I1123 15:52:41.147809 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kr6x7" event={"ID":"fd4b8f91-98c8-4974-bee2-6b339d5a9eae","Type":"ContainerDied","Data":"09f27bd1177d919adb145c9a391e9475c4e35e6788b85ae650bff95b6c1d599d"}
Nov 23 15:52:41 crc kubenswrapper[4718]: I1123 15:52:41.147841 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kr6x7" event={"ID":"fd4b8f91-98c8-4974-bee2-6b339d5a9eae","Type":"ContainerStarted","Data":"4239484eb12f6dba82e6d752719acdc4acb290066a4e71bdf8a76104fa898019"}
Nov 23 15:52:41 crc kubenswrapper[4718]: I1123 15:52:41.201264 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d656998f4-p7dq2_313f2889-e11a-440a-8358-612780f4a348/manager/0.log"
Nov 23 15:52:41 crc kubenswrapper[4718]: I1123 15:52:41.204931 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6d4bf84b58-qtpps_424e5dfa-98a5-480c-aeb9-8f279b2fdee4/kube-rbac-proxy/0.log"
Nov 23 15:52:41 crc kubenswrapper[4718]: I1123 15:52:41.248235 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-669b8498dc-8hbzm_2f871886-2351-4861-a1d6-3f7711fa936e/manager/0.log"
Nov 23 15:52:41 crc kubenswrapper[4718]: I1123 15:52:41.376979 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-b4c496f69-kt77s_8e9db0b8-bd2b-45fa-8105-2524e81bcd70/kube-rbac-proxy/0.log"
Nov 23 15:52:41 crc kubenswrapper[4718]: I1123 15:52:41.388097 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6d4bf84b58-qtpps_424e5dfa-98a5-480c-aeb9-8f279b2fdee4/manager/0.log"
Nov 23 15:52:41 crc kubenswrapper[4718]: I1123 15:52:41.431564 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-b4c496f69-kt77s_8e9db0b8-bd2b-45fa-8105-2524e81bcd70/manager/0.log"
Nov 23 15:52:41 crc kubenswrapper[4718]: I1123 15:52:41.548502 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-8c6448b9f-bfnhk_e3358e41-4842-4768-8235-96a8166d43b0/kube-rbac-proxy/0.log"
Nov 23 15:52:41 crc kubenswrapper[4718]: I1123 15:52:41.577322 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-8c6448b9f-bfnhk_e3358e41-4842-4768-8235-96a8166d43b0/manager/0.log"
Nov 23 15:52:43 crc kubenswrapper[4718]: I1123 15:52:43.167542 4718 generic.go:334] "Generic (PLEG): container finished" podID="fd4b8f91-98c8-4974-bee2-6b339d5a9eae" containerID="9bfc41aeed32b8fb2593719dc50e871780bce04627db845133e8fe000fe1cba6" exitCode=0
Nov 23 15:52:43 crc kubenswrapper[4718]: I1123 15:52:43.167622 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kr6x7" event={"ID":"fd4b8f91-98c8-4974-bee2-6b339d5a9eae","Type":"ContainerDied","Data":"9bfc41aeed32b8fb2593719dc50e871780bce04627db845133e8fe000fe1cba6"}
Nov 23 15:52:44 crc kubenswrapper[4718]: I1123 15:52:44.178640 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kr6x7" event={"ID":"fd4b8f91-98c8-4974-bee2-6b339d5a9eae","Type":"ContainerStarted","Data":"1ad0f527126f3ede21ea1297d15c114827563fdfc313a4ab4fad3351fe2b6f97"}
Nov 23 15:52:44 crc kubenswrapper[4718]: I1123 15:52:44.204007 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kr6x7" podStartSLOduration=2.758652042 podStartE2EDuration="5.20398636s" podCreationTimestamp="2025-11-23 15:52:39 +0000 UTC" firstStartedPulling="2025-11-23 15:52:41.151096323 +0000 UTC m=+4012.390716157" lastFinishedPulling="2025-11-23 15:52:43.596430631 +0000 UTC m=+4014.836050475" observedRunningTime="2025-11-23 15:52:44.1965223 +0000 UTC m=+4015.436142154" watchObservedRunningTime="2025-11-23 15:52:44.20398636 +0000 UTC m=+4015.443606204"
Nov 23 15:52:49 crc kubenswrapper[4718]: I1123 15:52:49.549900 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kr6x7"
Nov 23 15:52:49 crc kubenswrapper[4718]: I1123 15:52:49.550380 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kr6x7"
Nov 23 15:52:49 crc kubenswrapper[4718]: I1123 15:52:49.606602 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kr6x7"
Nov 23 15:52:50 crc kubenswrapper[4718]: I1123 15:52:50.276049 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kr6x7"
Nov 23 15:52:50 crc kubenswrapper[4718]: I1123 15:52:50.324197 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kr6x7"]
Nov 23 15:52:52 crc kubenswrapper[4718]: I1123 15:52:52.249071 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kr6x7" podUID="fd4b8f91-98c8-4974-bee2-6b339d5a9eae" containerName="registry-server" containerID="cri-o://1ad0f527126f3ede21ea1297d15c114827563fdfc313a4ab4fad3351fe2b6f97" gracePeriod=2
Nov 23 15:52:52 crc kubenswrapper[4718]: I1123 15:52:52.740397 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kr6x7"
Nov 23 15:52:52 crc kubenswrapper[4718]: I1123 15:52:52.818213 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd4b8f91-98c8-4974-bee2-6b339d5a9eae-catalog-content\") pod \"fd4b8f91-98c8-4974-bee2-6b339d5a9eae\" (UID: \"fd4b8f91-98c8-4974-bee2-6b339d5a9eae\") "
Nov 23 15:52:52 crc kubenswrapper[4718]: I1123 15:52:52.818326 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcwp2\" (UniqueName: \"kubernetes.io/projected/fd4b8f91-98c8-4974-bee2-6b339d5a9eae-kube-api-access-pcwp2\") pod \"fd4b8f91-98c8-4974-bee2-6b339d5a9eae\" (UID: \"fd4b8f91-98c8-4974-bee2-6b339d5a9eae\") "
Nov 23 15:52:52 crc kubenswrapper[4718]: I1123 15:52:52.818351 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd4b8f91-98c8-4974-bee2-6b339d5a9eae-utilities\") pod \"fd4b8f91-98c8-4974-bee2-6b339d5a9eae\" (UID: \"fd4b8f91-98c8-4974-bee2-6b339d5a9eae\") "
Nov 23 15:52:52 crc kubenswrapper[4718]: I1123 15:52:52.819245 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd4b8f91-98c8-4974-bee2-6b339d5a9eae-utilities" (OuterVolumeSpecName: "utilities") pod "fd4b8f91-98c8-4974-bee2-6b339d5a9eae" (UID: "fd4b8f91-98c8-4974-bee2-6b339d5a9eae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 15:52:52 crc kubenswrapper[4718]: I1123 15:52:52.825493 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd4b8f91-98c8-4974-bee2-6b339d5a9eae-kube-api-access-pcwp2" (OuterVolumeSpecName: "kube-api-access-pcwp2") pod "fd4b8f91-98c8-4974-bee2-6b339d5a9eae" (UID: "fd4b8f91-98c8-4974-bee2-6b339d5a9eae"). InnerVolumeSpecName "kube-api-access-pcwp2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 15:52:52 crc kubenswrapper[4718]: I1123 15:52:52.875518 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd4b8f91-98c8-4974-bee2-6b339d5a9eae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd4b8f91-98c8-4974-bee2-6b339d5a9eae" (UID: "fd4b8f91-98c8-4974-bee2-6b339d5a9eae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 15:52:52 crc kubenswrapper[4718]: I1123 15:52:52.920388 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd4b8f91-98c8-4974-bee2-6b339d5a9eae-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 15:52:52 crc kubenswrapper[4718]: I1123 15:52:52.920415 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcwp2\" (UniqueName: \"kubernetes.io/projected/fd4b8f91-98c8-4974-bee2-6b339d5a9eae-kube-api-access-pcwp2\") on node \"crc\" DevicePath \"\""
Nov 23 15:52:52 crc kubenswrapper[4718]: I1123 15:52:52.920424 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd4b8f91-98c8-4974-bee2-6b339d5a9eae-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 15:52:53 crc kubenswrapper[4718]: I1123 15:52:53.053515 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 15:52:53 crc kubenswrapper[4718]: I1123 15:52:53.053576 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 15:52:53 crc kubenswrapper[4718]: I1123 15:52:53.263195 4718 generic.go:334] "Generic (PLEG): container finished" podID="fd4b8f91-98c8-4974-bee2-6b339d5a9eae" containerID="1ad0f527126f3ede21ea1297d15c114827563fdfc313a4ab4fad3351fe2b6f97" exitCode=0
Nov 23 15:52:53 crc kubenswrapper[4718]: I1123 15:52:53.263258 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kr6x7" event={"ID":"fd4b8f91-98c8-4974-bee2-6b339d5a9eae","Type":"ContainerDied","Data":"1ad0f527126f3ede21ea1297d15c114827563fdfc313a4ab4fad3351fe2b6f97"}
Nov 23 15:52:53 crc kubenswrapper[4718]: I1123 15:52:53.263300 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kr6x7" event={"ID":"fd4b8f91-98c8-4974-bee2-6b339d5a9eae","Type":"ContainerDied","Data":"4239484eb12f6dba82e6d752719acdc4acb290066a4e71bdf8a76104fa898019"}
Nov 23 15:52:53 crc kubenswrapper[4718]: I1123 15:52:53.263330 4718 scope.go:117] "RemoveContainer" containerID="1ad0f527126f3ede21ea1297d15c114827563fdfc313a4ab4fad3351fe2b6f97"
Nov 23 15:52:53 crc kubenswrapper[4718]: I1123 15:52:53.263329 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kr6x7"
Nov 23 15:52:53 crc kubenswrapper[4718]: I1123 15:52:53.286687 4718 scope.go:117] "RemoveContainer" containerID="9bfc41aeed32b8fb2593719dc50e871780bce04627db845133e8fe000fe1cba6"
Nov 23 15:52:53 crc kubenswrapper[4718]: I1123 15:52:53.307056 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kr6x7"]
Nov 23 15:52:53 crc kubenswrapper[4718]: I1123 15:52:53.314773 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kr6x7"]
Nov 23 15:52:53 crc kubenswrapper[4718]: I1123 15:52:53.328653 4718 scope.go:117] "RemoveContainer" containerID="09f27bd1177d919adb145c9a391e9475c4e35e6788b85ae650bff95b6c1d599d"
Nov 23 15:52:53 crc kubenswrapper[4718]: I1123 15:52:53.392043 4718 scope.go:117] "RemoveContainer" containerID="1ad0f527126f3ede21ea1297d15c114827563fdfc313a4ab4fad3351fe2b6f97"
Nov 23 15:52:53 crc kubenswrapper[4718]: E1123 15:52:53.392576 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ad0f527126f3ede21ea1297d15c114827563fdfc313a4ab4fad3351fe2b6f97\": container with ID starting with 1ad0f527126f3ede21ea1297d15c114827563fdfc313a4ab4fad3351fe2b6f97 not found: ID does not exist" containerID="1ad0f527126f3ede21ea1297d15c114827563fdfc313a4ab4fad3351fe2b6f97"
Nov 23 15:52:53 crc kubenswrapper[4718]: I1123 15:52:53.392628 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ad0f527126f3ede21ea1297d15c114827563fdfc313a4ab4fad3351fe2b6f97"} err="failed to get container status \"1ad0f527126f3ede21ea1297d15c114827563fdfc313a4ab4fad3351fe2b6f97\": rpc error: code = NotFound desc = could not find container \"1ad0f527126f3ede21ea1297d15c114827563fdfc313a4ab4fad3351fe2b6f97\": container with ID starting with 1ad0f527126f3ede21ea1297d15c114827563fdfc313a4ab4fad3351fe2b6f97 not found: ID does not exist"
Nov 23 15:52:53 crc kubenswrapper[4718]: I1123 15:52:53.392661 4718 scope.go:117] "RemoveContainer" containerID="9bfc41aeed32b8fb2593719dc50e871780bce04627db845133e8fe000fe1cba6"
Nov 23 15:52:53 crc kubenswrapper[4718]: E1123 15:52:53.393145 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bfc41aeed32b8fb2593719dc50e871780bce04627db845133e8fe000fe1cba6\": container with ID starting with 9bfc41aeed32b8fb2593719dc50e871780bce04627db845133e8fe000fe1cba6 not found: ID does not exist" containerID="9bfc41aeed32b8fb2593719dc50e871780bce04627db845133e8fe000fe1cba6"
Nov 23 15:52:53 crc kubenswrapper[4718]: I1123 15:52:53.393184 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bfc41aeed32b8fb2593719dc50e871780bce04627db845133e8fe000fe1cba6"} err="failed to get container status \"9bfc41aeed32b8fb2593719dc50e871780bce04627db845133e8fe000fe1cba6\": rpc error: code = NotFound desc = could not find container \"9bfc41aeed32b8fb2593719dc50e871780bce04627db845133e8fe000fe1cba6\": container with ID starting with 9bfc41aeed32b8fb2593719dc50e871780bce04627db845133e8fe000fe1cba6 not found: ID does not exist"
Nov 23 15:52:53 crc kubenswrapper[4718]: I1123 15:52:53.393210 4718 scope.go:117] "RemoveContainer" containerID="09f27bd1177d919adb145c9a391e9475c4e35e6788b85ae650bff95b6c1d599d"
Nov 23 15:52:53 crc kubenswrapper[4718]: E1123 15:52:53.393556 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09f27bd1177d919adb145c9a391e9475c4e35e6788b85ae650bff95b6c1d599d\": container with ID starting with 09f27bd1177d919adb145c9a391e9475c4e35e6788b85ae650bff95b6c1d599d not found: ID does not exist" containerID="09f27bd1177d919adb145c9a391e9475c4e35e6788b85ae650bff95b6c1d599d"
Nov 23 15:52:53 crc kubenswrapper[4718]: I1123 15:52:53.393588 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09f27bd1177d919adb145c9a391e9475c4e35e6788b85ae650bff95b6c1d599d"} err="failed to get container status \"09f27bd1177d919adb145c9a391e9475c4e35e6788b85ae650bff95b6c1d599d\": rpc error: code = NotFound desc = could not find container \"09f27bd1177d919adb145c9a391e9475c4e35e6788b85ae650bff95b6c1d599d\": container with ID starting with 09f27bd1177d919adb145c9a391e9475c4e35e6788b85ae650bff95b6c1d599d not found: ID does not exist"
Nov 23 15:52:54 crc kubenswrapper[4718]: I1123 15:52:54.453890 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd4b8f91-98c8-4974-bee2-6b339d5a9eae" path="/var/lib/kubelet/pods/fd4b8f91-98c8-4974-bee2-6b339d5a9eae/volumes"
Nov 23 15:52:58 crc kubenswrapper[4718]: I1123 15:52:58.405078 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-8xb95_6bfbc7c8-2c86-4bf8-87b6-6eec240a5e2e/control-plane-machine-set-operator/0.log"
Nov 23 15:52:58 crc kubenswrapper[4718]: I1123 15:52:58.438910 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7vdf5_95464c4e-4616-4ab3-9928-4dc41beee4af/kube-rbac-proxy/0.log"
Nov 23 15:52:58 crc kubenswrapper[4718]: I1123 15:52:58.595705 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7vdf5_95464c4e-4616-4ab3-9928-4dc41beee4af/machine-api-operator/0.log"
Nov 23 15:53:10 crc kubenswrapper[4718]: I1123 15:53:10.364139 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-g6g5c_9c7e4ce6-8467-4656-9451-4ca2cf5f05e3/cert-manager-controller/0.log"
Nov 23 15:53:10 crc kubenswrapper[4718]: I1123 15:53:10.537700 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-dlx5n_e6b032f0-b0b8-4db8-af64-ac70e535c9e7/cert-manager-cainjector/0.log"
Nov 23 15:53:10 crc kubenswrapper[4718]: I1123 15:53:10.594500 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-wtn4p_006f97d3-c32d-4175-b6d1-41f25d854d69/cert-manager-webhook/0.log"
Nov 23 15:53:23 crc kubenswrapper[4718]: I1123 15:53:23.052924 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 15:53:23 crc kubenswrapper[4718]: I1123 15:53:23.053543 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 15:53:23 crc kubenswrapper[4718]: I1123 15:53:23.220939 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-lvlqg_36e9d48f-9291-4ffd-8ca3-342d488e8bc2/nmstate-console-plugin/0.log"
Nov 23 15:53:23 crc kubenswrapper[4718]: I1123 15:53:23.399103 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-z9n6m_a6e7f145-b630-4486-9a8d-e08d114c3f0a/nmstate-handler/0.log"
Nov 23 15:53:23 crc kubenswrapper[4718]: I1123 15:53:23.466164 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-65tj8_955636b8-9879-4e63-a399-6ac037c1fcd5/kube-rbac-proxy/0.log"
Nov 23 15:53:23 crc kubenswrapper[4718]: I1123 15:53:23.481967 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-65tj8_955636b8-9879-4e63-a399-6ac037c1fcd5/nmstate-metrics/0.log"
Nov 23 15:53:23 crc kubenswrapper[4718]: I1123 15:53:23.595928 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-vj25n_62a579ff-5a89-4c20-afea-9419bd3bc1a0/nmstate-operator/0.log"
Nov 23 15:53:23 crc kubenswrapper[4718]: I1123 15:53:23.692841 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-t5mgc_21067045-a7f2-4d80-a1fe-d2c15d3b7ee9/nmstate-webhook/0.log"
Nov 23 15:53:37 crc kubenswrapper[4718]: I1123 15:53:37.140405 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-n4n8k_28328d3c-1a5b-45f9-a606-9f604db00a0a/kube-rbac-proxy/0.log"
Nov 23 15:53:37 crc kubenswrapper[4718]: I1123 15:53:37.212991 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-n4n8k_28328d3c-1a5b-45f9-a606-9f604db00a0a/controller/0.log"
Nov 23 15:53:37 crc kubenswrapper[4718]: I1123 15:53:37.577689 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/cp-frr-files/0.log"
Nov 23 15:53:37 crc kubenswrapper[4718]: I1123 15:53:37.747629 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/cp-reloader/0.log"
Nov 23 15:53:37 crc kubenswrapper[4718]: I1123 15:53:37.777016 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/cp-frr-files/0.log"
Nov 23 15:53:37 crc kubenswrapper[4718]: I1123 15:53:37.777181 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/cp-reloader/0.log"
Nov 23 15:53:37 crc kubenswrapper[4718]: I1123 15:53:37.796297 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/cp-metrics/0.log"
Nov 23 15:53:37 crc kubenswrapper[4718]: I1123 15:53:37.946625 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/cp-frr-files/0.log"
Nov 23 15:53:37 crc kubenswrapper[4718]: I1123 15:53:37.954239 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/cp-reloader/0.log"
Nov 23 15:53:37 crc kubenswrapper[4718]: I1123 15:53:37.961003 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/cp-metrics/0.log"
Nov 23 15:53:38 crc kubenswrapper[4718]: I1123 15:53:38.005647 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/cp-metrics/0.log"
Nov 23 15:53:38 crc kubenswrapper[4718]: I1123 15:53:38.178241 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/cp-frr-files/0.log"
Nov 23 15:53:38 crc kubenswrapper[4718]: I1123 15:53:38.183658 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/cp-metrics/0.log"
Nov 23 15:53:38 crc kubenswrapper[4718]: I1123 15:53:38.202034 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/cp-reloader/0.log"
Nov 23 15:53:38 crc kubenswrapper[4718]: I1123 15:53:38.211698 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/controller/0.log"
Nov 23 15:53:38 crc kubenswrapper[4718]: I1123 15:53:38.368278 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/frr-metrics/0.log"
Nov 23 15:53:38 crc kubenswrapper[4718]: I1123 15:53:38.369539 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/kube-rbac-proxy/0.log"
Nov 23 15:53:38 crc kubenswrapper[4718]: I1123 15:53:38.412161 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/kube-rbac-proxy-frr/0.log"
Nov 23 15:53:38 crc kubenswrapper[4718]: I1123 15:53:38.652210 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-z66k9_5cd190ff-29e1-4d41-9687-57c554820cb4/frr-k8s-webhook-server/0.log"
Nov 23 15:53:38 crc kubenswrapper[4718]: I1123 15:53:38.654959 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/reloader/0.log"
Nov 23 15:53:39 crc kubenswrapper[4718]: I1123 15:53:39.327466 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-768fb95d78-mp5lj_44722e76-1f31-46d4-b765-abd86f655b27/manager/0.log"
Nov 23 15:53:39 crc kubenswrapper[4718]: I1123 15:53:39.598370 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7bf7789474-dgc67_0e181755-dfbb-4608-b061-cbb0e95d6f95/webhook-server/0.log"
Nov 23 15:53:39 crc kubenswrapper[4718]: I1123 15:53:39.599286 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-65gj6_e56ba209-dfa7-4f05-b9fd-e156af86cd9f/kube-rbac-proxy/0.log"
Nov 23 15:53:39 crc kubenswrapper[4718]: I1123 15:53:39.701865 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hrx_ced693f2-68ad-4fd1-ab33-c03e8d9fcc56/frr/0.log"
Nov 23 15:53:40 crc kubenswrapper[4718]: I1123 15:53:40.111943 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-65gj6_e56ba209-dfa7-4f05-b9fd-e156af86cd9f/speaker/0.log"
Nov 23 15:53:51 crc kubenswrapper[4718]: I1123 15:53:51.689087 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2_363974b8-229c-43e4-85cf-a7e4187ed8d4/util/0.log"
Nov 23 15:53:51 crc kubenswrapper[4718]: I1123 15:53:51.988073 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2_363974b8-229c-43e4-85cf-a7e4187ed8d4/util/0.log"
Nov 23 15:53:51 crc kubenswrapper[4718]: I1123 15:53:51.999718 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2_363974b8-229c-43e4-85cf-a7e4187ed8d4/pull/0.log"
Nov 23 15:53:52 crc kubenswrapper[4718]: I1123 15:53:52.096769 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2_363974b8-229c-43e4-85cf-a7e4187ed8d4/pull/0.log"
Nov 23 15:53:52 crc kubenswrapper[4718]: I1123 15:53:52.194096 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2_363974b8-229c-43e4-85cf-a7e4187ed8d4/util/0.log"
Nov 23 15:53:52 crc kubenswrapper[4718]: I1123 15:53:52.217639 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2_363974b8-229c-43e4-85cf-a7e4187ed8d4/extract/0.log"
Nov 23 15:53:52 crc kubenswrapper[4718]: I1123 15:53:52.219005 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772exdqm2_363974b8-229c-43e4-85cf-a7e4187ed8d4/pull/0.log"
Nov 23 15:53:52 crc kubenswrapper[4718]: I1123 15:53:52.374381 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nb5q4_12f7b2fd-ebf7-4c33-8da3-bfd473790e77/extract-utilities/0.log"
Nov 23 15:53:52 crc kubenswrapper[4718]: I1123 15:53:52.621732 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nb5q4_12f7b2fd-ebf7-4c33-8da3-bfd473790e77/extract-utilities/0.log"
Nov 23 15:53:52 crc kubenswrapper[4718]: I1123 15:53:52.626818 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nb5q4_12f7b2fd-ebf7-4c33-8da3-bfd473790e77/extract-content/0.log"
Nov 23 15:53:52 crc kubenswrapper[4718]: I1123 15:53:52.642081 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nb5q4_12f7b2fd-ebf7-4c33-8da3-bfd473790e77/extract-content/0.log"
Nov 23 15:53:52 crc kubenswrapper[4718]: I1123 15:53:52.825582 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nb5q4_12f7b2fd-ebf7-4c33-8da3-bfd473790e77/extract-utilities/0.log"
Nov 23 15:53:52 crc kubenswrapper[4718]: I1123 15:53:52.849559 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nb5q4_12f7b2fd-ebf7-4c33-8da3-bfd473790e77/extract-content/0.log"
Nov 23 15:53:53 crc kubenswrapper[4718]: I1123 15:53:53.048766 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7wmpb_7dffe07d-8aa8-46b3-a5a5-28d8152d6df3/extract-utilities/0.log"
Nov 23 15:53:53 crc kubenswrapper[4718]: I1123 15:53:53.054538 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 15:53:53 crc kubenswrapper[4718]: I1123 15:53:53.054587 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 15:53:53 crc kubenswrapper[4718]: I1123 15:53:53.054632 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw"
Nov 23 15:53:53 crc kubenswrapper[4718]: I1123 15:53:53.055205 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8494e2961df6cbfe52cd4396ebd60e3b134bb680314d399effe7c36a101ad7dc"} pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 23 15:53:53 crc kubenswrapper[4718]: I1123 15:53:53.055279 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" containerID="cri-o://8494e2961df6cbfe52cd4396ebd60e3b134bb680314d399effe7c36a101ad7dc" gracePeriod=600
Nov 23 15:53:53 crc kubenswrapper[4718]: I1123 15:53:53.228085 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7wmpb_7dffe07d-8aa8-46b3-a5a5-28d8152d6df3/extract-content/0.log"
Nov 23 15:53:53 crc kubenswrapper[4718]: I1123 15:53:53.279127 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7wmpb_7dffe07d-8aa8-46b3-a5a5-28d8152d6df3/extract-utilities/0.log"
Nov 23 15:53:53 crc kubenswrapper[4718]: I1123 15:53:53.302696 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7wmpb_7dffe07d-8aa8-46b3-a5a5-28d8152d6df3/extract-content/0.log"
Nov 23 15:53:53 crc kubenswrapper[4718]: I1123 15:53:53.400914 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nb5q4_12f7b2fd-ebf7-4c33-8da3-bfd473790e77/registry-server/0.log"
Nov 23 15:53:53 crc kubenswrapper[4718]: I1123 15:53:53.485240 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7wmpb_7dffe07d-8aa8-46b3-a5a5-28d8152d6df3/extract-content/0.log"
Nov 23 15:53:53 crc kubenswrapper[4718]: I1123 15:53:53.491893 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7wmpb_7dffe07d-8aa8-46b3-a5a5-28d8152d6df3/extract-utilities/0.log"
Nov 23 15:53:53 crc kubenswrapper[4718]: I1123 15:53:53.748036 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx_756bc291-abf5-4395-8e55-5140aae72299/util/0.log"
Nov 23 15:53:53 crc kubenswrapper[4718]: I1123 15:53:53.812073 4718 generic.go:334] "Generic (PLEG): container finished" podID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerID="8494e2961df6cbfe52cd4396ebd60e3b134bb680314d399effe7c36a101ad7dc" exitCode=0
Nov 23 15:53:53 crc kubenswrapper[4718]: I1123 15:53:53.812125 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerDied","Data":"8494e2961df6cbfe52cd4396ebd60e3b134bb680314d399effe7c36a101ad7dc"}
Nov 23 15:53:53 crc kubenswrapper[4718]: I1123 15:53:53.812161 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerStarted","Data":"c3c4752a99a823ece3bb2e20f9fd435b1b4db934b0b2d1f33db1843641f32cfb"}
Nov 23 15:53:53 crc kubenswrapper[4718]: I1123 15:53:53.812185 4718 scope.go:117] "RemoveContainer" containerID="a6961801b3040b66bc4fb1eadcaa53ff84ab9f391d2daecd7e74098dd4e39757"
Nov 23 15:53:53 crc kubenswrapper[4718]: I1123 15:53:53.917667 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx_756bc291-abf5-4395-8e55-5140aae72299/pull/0.log"
Nov 23 15:53:53 crc kubenswrapper[4718]: I1123 15:53:53.939558 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx_756bc291-abf5-4395-8e55-5140aae72299/util/0.log"
Nov 23 15:53:53 crc kubenswrapper[4718]: I1123 15:53:53.954995 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx_756bc291-abf5-4395-8e55-5140aae72299/pull/0.log"
Nov 23 15:53:54 crc kubenswrapper[4718]: I1123 15:53:54.108895 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7wmpb_7dffe07d-8aa8-46b3-a5a5-28d8152d6df3/registry-server/0.log"
Nov 23 15:53:54 crc kubenswrapper[4718]: I1123 15:53:54.124998 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx_756bc291-abf5-4395-8e55-5140aae72299/util/0.log"
Nov 23 15:53:54 crc kubenswrapper[4718]: I1123 15:53:54.154360 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx_756bc291-abf5-4395-8e55-5140aae72299/extract/0.log"
Nov 23 15:53:54 crc kubenswrapper[4718]: I1123 15:53:54.195707 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bksmx_756bc291-abf5-4395-8e55-5140aae72299/pull/0.log"
Nov 23 15:53:54 crc kubenswrapper[4718]: I1123 15:53:54.361383 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-rdkgp_33a0a6b4-7aa0-4718-80f1-2d13fae9e761/marketplace-operator/0.log"
Nov 23 15:53:54 crc kubenswrapper[4718]: I1123 15:53:54.426377 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pn9qf_13ce74fe-0bed-4d92-9587-e933c3a3b03c/extract-utilities/0.log"
Nov 23 15:53:54 crc kubenswrapper[4718]: I1123 15:53:54.603606 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pn9qf_13ce74fe-0bed-4d92-9587-e933c3a3b03c/extract-content/0.log"
Nov 23 15:53:54 crc kubenswrapper[4718]: I1123 15:53:54.604237 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pn9qf_13ce74fe-0bed-4d92-9587-e933c3a3b03c/extract-utilities/0.log"
Nov 23 15:53:54 crc kubenswrapper[4718]: I1123 15:53:54.605558 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pn9qf_13ce74fe-0bed-4d92-9587-e933c3a3b03c/extract-content/0.log"
Nov 23 15:53:54 crc kubenswrapper[4718]: I1123 15:53:54.757592 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pn9qf_13ce74fe-0bed-4d92-9587-e933c3a3b03c/extract-utilities/0.log"
Nov 23 15:53:54 crc kubenswrapper[4718]: I1123 15:53:54.799799 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pn9qf_13ce74fe-0bed-4d92-9587-e933c3a3b03c/extract-content/0.log"
Nov 23 15:53:54 crc kubenswrapper[4718]: I1123 15:53:54.965915 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pn9qf_13ce74fe-0bed-4d92-9587-e933c3a3b03c/registry-server/0.log"
Nov 23 15:53:54 crc kubenswrapper[4718]: I1123 15:53:54.995070 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjtk6_81743302-97b1-40c2-953f-070a5b775a74/extract-utilities/0.log"
Nov 23 15:53:55 crc kubenswrapper[4718]: I1123 15:53:55.134811 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjtk6_81743302-97b1-40c2-953f-070a5b775a74/extract-utilities/0.log"
Nov 23 15:53:55 crc kubenswrapper[4718]: I1123 15:53:55.165660 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjtk6_81743302-97b1-40c2-953f-070a5b775a74/extract-content/0.log"
Nov 23 15:53:55 crc kubenswrapper[4718]: I1123 15:53:55.169273 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjtk6_81743302-97b1-40c2-953f-070a5b775a74/extract-content/0.log"
Nov 23 15:53:55 crc kubenswrapper[4718]: I1123 15:53:55.311474 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjtk6_81743302-97b1-40c2-953f-070a5b775a74/extract-utilities/0.log"
Nov 23 15:53:55 crc kubenswrapper[4718]: I1123 15:53:55.348074 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjtk6_81743302-97b1-40c2-953f-070a5b775a74/extract-content/0.log"
Nov 23 15:53:55 crc kubenswrapper[4718]: I1123 15:53:55.826940 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjtk6_81743302-97b1-40c2-953f-070a5b775a74/registry-server/0.log"
Nov 23 15:54:21 crc kubenswrapper[4718]: E1123 15:54:21.383009 4718 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.2:43688->38.102.83.2:39303: read tcp 38.102.83.2:43688->38.102.83.2:39303: read: connection reset by peer
Nov 23 15:55:34 crc kubenswrapper[4718]: I1123 15:55:34.838513 4718 generic.go:334] "Generic (PLEG): container finished" podID="e17db7d9-eb38-46ab-ba44-befa4ad50685" containerID="12a12cca0af1ceb49d1833584ca52b2f5b47bb3192078e68d317629f26d6a68c" exitCode=0
Nov 23 15:55:34 crc kubenswrapper[4718]: I1123 15:55:34.838631 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7qj2/must-gather-h69sp" event={"ID":"e17db7d9-eb38-46ab-ba44-befa4ad50685","Type":"ContainerDied","Data":"12a12cca0af1ceb49d1833584ca52b2f5b47bb3192078e68d317629f26d6a68c"}
Nov 23 15:55:34 crc kubenswrapper[4718]: I1123 15:55:34.839518 4718 scope.go:117] "RemoveContainer" containerID="12a12cca0af1ceb49d1833584ca52b2f5b47bb3192078e68d317629f26d6a68c"
Nov 23 15:55:35 crc kubenswrapper[4718]: I1123 15:55:35.703239 4718 log.go:25]
"Finished parsing log file" path="/var/log/pods/openshift-must-gather-x7qj2_must-gather-h69sp_e17db7d9-eb38-46ab-ba44-befa4ad50685/gather/0.log" Nov 23 15:55:45 crc kubenswrapper[4718]: I1123 15:55:45.353269 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x7qj2/must-gather-h69sp"] Nov 23 15:55:45 crc kubenswrapper[4718]: I1123 15:55:45.354074 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-x7qj2/must-gather-h69sp" podUID="e17db7d9-eb38-46ab-ba44-befa4ad50685" containerName="copy" containerID="cri-o://23fc6affa856442b0cf2395664540bf8a34da3b4828382182157da478f2154ba" gracePeriod=2 Nov 23 15:55:45 crc kubenswrapper[4718]: I1123 15:55:45.371422 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x7qj2/must-gather-h69sp"] Nov 23 15:55:45 crc kubenswrapper[4718]: I1123 15:55:45.786143 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x7qj2_must-gather-h69sp_e17db7d9-eb38-46ab-ba44-befa4ad50685/copy/0.log" Nov 23 15:55:45 crc kubenswrapper[4718]: I1123 15:55:45.786987 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7qj2/must-gather-h69sp" Nov 23 15:55:45 crc kubenswrapper[4718]: I1123 15:55:45.933106 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x7qj2_must-gather-h69sp_e17db7d9-eb38-46ab-ba44-befa4ad50685/copy/0.log" Nov 23 15:55:45 crc kubenswrapper[4718]: I1123 15:55:45.933497 4718 generic.go:334] "Generic (PLEG): container finished" podID="e17db7d9-eb38-46ab-ba44-befa4ad50685" containerID="23fc6affa856442b0cf2395664540bf8a34da3b4828382182157da478f2154ba" exitCode=143 Nov 23 15:55:45 crc kubenswrapper[4718]: I1123 15:55:45.933555 4718 scope.go:117] "RemoveContainer" containerID="23fc6affa856442b0cf2395664540bf8a34da3b4828382182157da478f2154ba" Nov 23 15:55:45 crc kubenswrapper[4718]: I1123 15:55:45.933591 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7qj2/must-gather-h69sp" Nov 23 15:55:45 crc kubenswrapper[4718]: I1123 15:55:45.938482 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp9jg\" (UniqueName: \"kubernetes.io/projected/e17db7d9-eb38-46ab-ba44-befa4ad50685-kube-api-access-hp9jg\") pod \"e17db7d9-eb38-46ab-ba44-befa4ad50685\" (UID: \"e17db7d9-eb38-46ab-ba44-befa4ad50685\") " Nov 23 15:55:45 crc kubenswrapper[4718]: I1123 15:55:45.938547 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e17db7d9-eb38-46ab-ba44-befa4ad50685-must-gather-output\") pod \"e17db7d9-eb38-46ab-ba44-befa4ad50685\" (UID: \"e17db7d9-eb38-46ab-ba44-befa4ad50685\") " Nov 23 15:55:45 crc kubenswrapper[4718]: I1123 15:55:45.947086 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e17db7d9-eb38-46ab-ba44-befa4ad50685-kube-api-access-hp9jg" (OuterVolumeSpecName: "kube-api-access-hp9jg") pod "e17db7d9-eb38-46ab-ba44-befa4ad50685" (UID: "e17db7d9-eb38-46ab-ba44-befa4ad50685"). InnerVolumeSpecName "kube-api-access-hp9jg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 15:55:45 crc kubenswrapper[4718]: I1123 15:55:45.960585 4718 scope.go:117] "RemoveContainer" containerID="12a12cca0af1ceb49d1833584ca52b2f5b47bb3192078e68d317629f26d6a68c" Nov 23 15:55:46 crc kubenswrapper[4718]: I1123 15:55:46.040207 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hp9jg\" (UniqueName: \"kubernetes.io/projected/e17db7d9-eb38-46ab-ba44-befa4ad50685-kube-api-access-hp9jg\") on node \"crc\" DevicePath \"\"" Nov 23 15:55:46 crc kubenswrapper[4718]: I1123 15:55:46.073674 4718 scope.go:117] "RemoveContainer" containerID="23fc6affa856442b0cf2395664540bf8a34da3b4828382182157da478f2154ba" Nov 23 15:55:46 crc kubenswrapper[4718]: E1123 15:55:46.074110 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23fc6affa856442b0cf2395664540bf8a34da3b4828382182157da478f2154ba\": container with ID starting with 23fc6affa856442b0cf2395664540bf8a34da3b4828382182157da478f2154ba not found: ID does not exist" containerID="23fc6affa856442b0cf2395664540bf8a34da3b4828382182157da478f2154ba" Nov 23 15:55:46 crc kubenswrapper[4718]: I1123 15:55:46.074142 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23fc6affa856442b0cf2395664540bf8a34da3b4828382182157da478f2154ba"} err="failed to get container status \"23fc6affa856442b0cf2395664540bf8a34da3b4828382182157da478f2154ba\": rpc error: code = NotFound desc = could not find container \"23fc6affa856442b0cf2395664540bf8a34da3b4828382182157da478f2154ba\": container with ID starting with 23fc6affa856442b0cf2395664540bf8a34da3b4828382182157da478f2154ba not found: ID does not exist" Nov 23 15:55:46 crc kubenswrapper[4718]: I1123 15:55:46.074176 4718 scope.go:117] "RemoveContainer" containerID="12a12cca0af1ceb49d1833584ca52b2f5b47bb3192078e68d317629f26d6a68c" Nov 23 15:55:46 crc kubenswrapper[4718]: E1123 15:55:46.074404 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12a12cca0af1ceb49d1833584ca52b2f5b47bb3192078e68d317629f26d6a68c\": container with ID starting with 12a12cca0af1ceb49d1833584ca52b2f5b47bb3192078e68d317629f26d6a68c not found: ID does not exist" containerID="12a12cca0af1ceb49d1833584ca52b2f5b47bb3192078e68d317629f26d6a68c" Nov 23 15:55:46 crc kubenswrapper[4718]: I1123 15:55:46.074429 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12a12cca0af1ceb49d1833584ca52b2f5b47bb3192078e68d317629f26d6a68c"} err="failed to get container status \"12a12cca0af1ceb49d1833584ca52b2f5b47bb3192078e68d317629f26d6a68c\": rpc error: code = NotFound desc = could not find container \"12a12cca0af1ceb49d1833584ca52b2f5b47bb3192078e68d317629f26d6a68c\": container with ID starting with 12a12cca0af1ceb49d1833584ca52b2f5b47bb3192078e68d317629f26d6a68c not found: ID does not exist" Nov 23 15:55:46 crc kubenswrapper[4718]: I1123 15:55:46.094452 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e17db7d9-eb38-46ab-ba44-befa4ad50685-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e17db7d9-eb38-46ab-ba44-befa4ad50685" (UID: "e17db7d9-eb38-46ab-ba44-befa4ad50685"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 15:55:46 crc kubenswrapper[4718]: I1123 15:55:46.141989 4718 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e17db7d9-eb38-46ab-ba44-befa4ad50685-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 23 15:55:46 crc kubenswrapper[4718]: I1123 15:55:46.452533 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e17db7d9-eb38-46ab-ba44-befa4ad50685" path="/var/lib/kubelet/pods/e17db7d9-eb38-46ab-ba44-befa4ad50685/volumes" Nov 23 15:55:53 crc kubenswrapper[4718]: I1123 15:55:53.052680 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 15:55:53 crc kubenswrapper[4718]: I1123 15:55:53.053264 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 15:56:23 crc kubenswrapper[4718]: I1123 15:56:23.052681 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 15:56:23 crc kubenswrapper[4718]: I1123 15:56:23.054504 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 15:56:53 crc kubenswrapper[4718]: I1123 15:56:53.052997 4718 patch_prober.go:28] interesting pod/machine-config-daemon-hkdqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 15:56:53 crc kubenswrapper[4718]: I1123 15:56:53.053477 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 15:56:53 crc kubenswrapper[4718]: I1123 15:56:53.053534 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" Nov 23 15:56:53 crc kubenswrapper[4718]: I1123 15:56:53.054164 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c3c4752a99a823ece3bb2e20f9fd435b1b4db934b0b2d1f33db1843641f32cfb"} pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 15:56:53 crc kubenswrapper[4718]: I1123 15:56:53.054248 4718 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerName="machine-config-daemon" containerID="cri-o://c3c4752a99a823ece3bb2e20f9fd435b1b4db934b0b2d1f33db1843641f32cfb" gracePeriod=600 Nov 23 15:56:53 crc kubenswrapper[4718]: E1123 15:56:53.183119 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:56:53 crc kubenswrapper[4718]: I1123 15:56:53.573001 4718 generic.go:334] "Generic (PLEG): container finished" podID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" containerID="c3c4752a99a823ece3bb2e20f9fd435b1b4db934b0b2d1f33db1843641f32cfb" exitCode=0 Nov 23 15:56:53 crc kubenswrapper[4718]: I1123 15:56:53.573062 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" event={"ID":"c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785","Type":"ContainerDied","Data":"c3c4752a99a823ece3bb2e20f9fd435b1b4db934b0b2d1f33db1843641f32cfb"} Nov 23 15:56:53 crc kubenswrapper[4718]: I1123 15:56:53.573109 4718 scope.go:117] "RemoveContainer" containerID="8494e2961df6cbfe52cd4396ebd60e3b134bb680314d399effe7c36a101ad7dc" Nov 23 15:56:53 crc kubenswrapper[4718]: I1123 15:56:53.573869 4718 scope.go:117] "RemoveContainer" containerID="c3c4752a99a823ece3bb2e20f9fd435b1b4db934b0b2d1f33db1843641f32cfb" Nov 23 15:56:53 crc kubenswrapper[4718]: E1123 15:56:53.574225 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:57:05 crc kubenswrapper[4718]: I1123 15:57:05.442032 4718 scope.go:117] "RemoveContainer" containerID="c3c4752a99a823ece3bb2e20f9fd435b1b4db934b0b2d1f33db1843641f32cfb" Nov 23 15:57:05 crc kubenswrapper[4718]: E1123 15:57:05.444348 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:57:17 crc kubenswrapper[4718]: I1123 15:57:17.441251 4718 scope.go:117] "RemoveContainer" containerID="c3c4752a99a823ece3bb2e20f9fd435b1b4db934b0b2d1f33db1843641f32cfb" Nov 23 15:57:17 crc kubenswrapper[4718]: E1123 15:57:17.442061 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:57:29 crc kubenswrapper[4718]: I1123 15:57:29.440895 4718 scope.go:117] "RemoveContainer" containerID="c3c4752a99a823ece3bb2e20f9fd435b1b4db934b0b2d1f33db1843641f32cfb" Nov 23 15:57:29 crc kubenswrapper[4718]: E1123 15:57:29.441478 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:57:44 crc kubenswrapper[4718]: I1123 15:57:44.442695 4718 scope.go:117] "RemoveContainer" containerID="c3c4752a99a823ece3bb2e20f9fd435b1b4db934b0b2d1f33db1843641f32cfb" Nov 23 15:57:44 crc kubenswrapper[4718]: E1123 15:57:44.443776 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:57:58 crc kubenswrapper[4718]: I1123 15:57:58.441977 4718 scope.go:117] "RemoveContainer" containerID="c3c4752a99a823ece3bb2e20f9fd435b1b4db934b0b2d1f33db1843641f32cfb" Nov 23 15:57:58 crc kubenswrapper[4718]: E1123 15:57:58.442931 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:58:11 crc kubenswrapper[4718]: I1123 15:58:11.441317 4718 scope.go:117] "RemoveContainer" containerID="c3c4752a99a823ece3bb2e20f9fd435b1b4db934b0b2d1f33db1843641f32cfb" Nov 23 15:58:11 crc kubenswrapper[4718]: E1123 15:58:11.442067 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785" Nov 23 15:58:24 crc kubenswrapper[4718]: I1123 15:58:24.441038 4718 scope.go:117] "RemoveContainer" containerID="c3c4752a99a823ece3bb2e20f9fd435b1b4db934b0b2d1f33db1843641f32cfb" Nov 23 15:58:24 crc kubenswrapper[4718]: E1123 15:58:24.441863 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hkdqw_openshift-machine-config-operator(c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785)\"" pod="openshift-machine-config-operator/machine-config-daemon-hkdqw" podUID="c3d9cfca-3d2e-42a8-9fd9-2d6c7772b785"